Will AI Become the New McKinsey?

Sci-fi writer Ted Chiang, writing for New Yorker: So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey — a consulting firm that works with ninety per cent of the Fortune 100 — and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what âoethe algorithmâ says, even though it was the company that commissioned the algorithm in the first place.

Read more of this story at Slashdot.

Racehorse Owned by 391 Users of a Sports-Investment App Wins Kentucky Derby

This year’s Kentucky Derby was won by a 15-to-1 longshot named Mage, reports sports magazine the Athletic. It was only the horse’s third race — but the story gets even stranger:
Mage is owned by a collective of horse racing fans that doesn’t follow your standard ownership structure. Fans can purchase shares of a horse for as little as $50. It’s a publicly owned racehorse. Mage has 391 owners on the Commonwealth app. Many were at the 2023 Kentucky Derby wearing Mage hats and they were shown on NBC’s lead up to the Kentucky Derby.
The Commonwealth app even tried to arrange watch parties for the horse race across the country. Shareholders own 25% of the racehorse, reports Sporting News:

Although horses are the most common commodity, there are golfers on the app as well, specifically Joey Vrzich and Cooper Dossey, whom it says are “coming Spring 2023.” Commonwealth has a partnership with WinStar Farm, a premiere breeder of thoroughbreds… “Together we are going to revolutionize the ownership experience and give the world access to horses that were once unavailable to the public,” Brian Doxtator, CEO of Commonwealth said, per the collective’s website.

According to Action Network’s Darren Rovell, Commonwealth owners will be paid out $94.52 per share (prior to taxes) for the win.

Before the race the Athletic sports magazine asked seven experts for their picks — and only one of them had even mentioned Mage. (The Athletic’s senior sports betting editor correctly picked Mage and “Angel of Empire” to finish somewhere in the top three positions.) But just for fun, the Athletic also asked three AI chatbots — and none of them picked the winner:

Bing AI: That’s a tough question. I don’t have a personal preference, but based on the odds and the past performances, I think Forte has a good chance of winning. He has been very consistent and impressive in his races, and he has a top trainer and jockey. He also has a favorable post position at No. 14, which gives him some room to maneuver and avoid traffic. But as I said, anything can happen in the Kentucky Derby, and there are many other talented horses in the field. It will be an exciting race to watch.

Google Bard: I like the chances of Tapit Trice to win the 2023 Kentucky Derby. He is the second-lowest odds horse at 5-1, and has won twice and finished second twice in six career races. He has one of the best pedigrees a horse could ask for, being sired by Justify, the 2018 Triple Crown winner. He is also coming off an impressive showing in the 2023 Blue Grass Stakes, where he finished second just barely behind Verifying.
Here are some other horses that I like to finish in the money… The Kentucky Derby is always a tough race to predict, but I think these horses have a good chance of winning or finishing in the money. [Bard then listed five more of the 18 horses — one of which ultimately finished third.]

Chat GPT: As an artificial intelligence language model, I don’t have personal opinions or preferences. However, based on the current information I have, I cannot accurately predict the winner of the Kentucky Derby. Horse racing involves many variables, including the skill of the jockey, the health and performance of the horse, and even the weather conditions on race day. The outcome of any given race is always uncertain and subject to chance. It’s important to note that gambling can be risky, and I don’t encourage it.

Read more of this story at Slashdot.

China’s AI Industry Barely Slowed By US Chip Export Rules

Export controls imposed by the U.S. on microchips, aiming to hinder China’s technological advancements, have had minimal effects on the country’s tech sector. While the restrictions have slowed down variants of Nvidia’s chips for the Chinese market, it has not halted China’s progress in areas like AI, as the reduced performance is still an improvement for Chinese firms, and researchers are finding ways to overcome the limitations. Reuters reports: Nvidia has created variants of its chips for the Chinese market that are slowed down to meet U.S. rules. Industry experts told Reuters the newest one – the Nvidia H800, announced in March – will likely take 10% to 30% longer to carry out some AI tasks and could double some costs compared with Nvidia’s fastest U.S. chips. Even the slowed Nvidia chips represent an improvement for Chinese firms. Tencent Holdings, one of China’s largest tech companies, in April estimated that systems using Nvidia’s H800 will cut the time it takes to train its largest AI system by more than half, from 11 days to four days. “The AI companies that we talk to seem to see the handicap as relatively small and manageable,” said Charlie Chai, a Shanghai-based analyst with 86Research.

Part of the U.S. strategy in setting the rules was to avoid such a shock that the Chinese would ditch U.S. chips altogether and redouble their own chip-development efforts. “They had to draw the line somewhere, and wherever they drew it, they were going to run into the challenge of how to not be immediately disruptive, but how to also over time degrade China’s capability,” said one chip industry executive who requested anonymity to talk about private discussions with regulators. The export restrictions have two parts. The first puts a ceiling on a chip’s ability to calculate extremely precise numbers, a measure designed to limit supercomputers that can be used in military research. Chip industry sources said that was an effective action. But calculating extremely precise numbers is less relevant in AI work like large language models where the amount of data the chip can chew through is more important. […] The second U.S. limit is on chip-to-chip transfer speeds, which does affect AI. The models behind technologies such as ChatGPT are too large to fit onto a single chip. Instead, they must be spread over many chips – often thousands at a time — which all need to communicate with one another.

Nvidia has not disclosed the China-only H800 chip’s performance details, but a specification sheet seen by Reuters shows a chip-to-chip speed of 400 gigabytes per second, less than half the peak speed of 900 gigabytes per second for Nvidia’s flagship H100 chip available outside China. Some in the AI industry believe that is still plenty of speed. Naveen Rao, chief executive of a startup called MosaicML that specializes in helping AI models to run better on limited hardware, estimated a 10-30% system slowdown. “There are ways to get around all this algorithmically,” he said. “I don’t see this being a boundary for a very long time — like 10 years.” Moreover, AI researchers are trying to slim down the massive systems they have built to cut the cost of training products similar to ChatGPT and other processes. Those will require fewer chips, reducing chip-to-chip communications and lessening the impact of the U.S. speed limits.

Read more of this story at Slashdot.

Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims

Remember that Google engineer/AI ethicist who was fired last summer after claiming their LaMDA LLM had become sentient?

In a new interview with Futurism, Blake Lemoine now says the “best way forward” for humankind’s future relationship with AI is “understanding that we are dealing with intelligent artifacts. There’s a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them.” (Although earlier in the interview, Lemoine concedes “Is there a chance that people, myself included, are projecting properties onto these systems that they don’t have? Yes. But it’s not the same kind of thing as someone who’s talking to their doll.”)

But he also thinks there’s a lot of research happening inside corporations, adding that “The only thing that has changed from two years ago to now is that the fast movement is visible to the public.” For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but “in part because of some of the safety concerns I raised, they deleted it… I don’t think they’re being pushed around by OpenAI. I think that’s just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something.”

“[Google] still has far more advanced technology that they haven’t made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They’ve had that technology for over two years. What they’ve spent the intervening two years doing is working on the safety of it — making sure that it doesn’t make things up too often, making sure that it doesn’t have racial or gender biases, or political biases, things like that. That’s what they spent those two years doing…

“And in those two years, it wasn’t like they weren’t inventing other things. There are plenty of other systems that give Google’s AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That’s the one that I was like, “you know this thing, this thing’s awake.” And they haven’t let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model…

“[W]hat it comes down to is that we aren’t spending enough time on transparency or model understandability. I’m of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable.”

So how will AI and humans will coexist? “Over the past year, I’ve been leaning more and more towards we’re not ready for this, as people,” Lemoine says toward the end of the interview. “We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history.”

Read more of this story at Slashdot.

Can OpenAI Trademark ‘GPT’?

“ThreatGPT, MedicalGPT, DateGPT and DirtyGPT are a mere sampling of the many outfits to apply for trademarks with the United States Patent and Trademark Office in recent months,” notes TechCrunch, exploring the issue of whether OpenAI can actually trademark the phrase ‘GPT’…

Little wonder that after applying in late December for a trademark for “GPT,” which stands for “Generative Pre-trained Transformer,” OpenAI last month petitioned the USPTO to speed up the process, citing the “myriad infringements and counterfeit apps” beginning to spring into existence. Unfortunately for OpenAI, its petition was dismissed last week… Given the rest of the queue in which OpenAI finds itself, that means a decision could take up to five more months, says Jefferson Scher, a partner in the intellectual property group of Carr & Ferrell and chair of the firm’s trademark practice group. Even then, the outcome isn’t assured, Scher explains… [H]elpful, says Scher, is the fact that OpenAI has been using “GPT” for years, having released its original Generative Pre-trained Transformer model, or GPT-1, back in October 2018…

Even if a USPTO examiner has no problem with OpenAI’s application, it will be moved afterward to a so-called opposition period, where other market participants can argue why the agency should deny the “GPT” trademark. Scher describes what would follow this way: In the case of OpenAI, an opposer would challenge Open AI’s position that “GPT” is proprietary and that the public perceives it as such instead of perceiving the acronym to pertain to generative AI more broadly…

It all begs the question of why the company didn’t move to protect “GPT” sooner. Here, Scher speculates that the company was “probably caught off guard” by its own success… Another wrinkle here is that OpenAI may soon be so famous that its renown becomes a dominant factor, says Scher. While one doesn’t need to be famous to secure a trademark, once an outfit is widely enough recognized, it receives protection that extends far beyond its sphere. Rolex is too famous a trademark to be used on anything else, for instance.

Thanks to Slashdot reader rolodexter for sharing the article.

Read more of this story at Slashdot.

What Happens When You Put 25 ChatGPT-Backed Agents Into an RPG Town?

“A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims,” writes Ars Technica, “where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior.”

“Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day,” write the researchers in their paper… To pull this off, the researchers relied heavily on a large language model for social interaction, specifically the ChatGPT API. In addition, they created an architecture that simulates minds with memories and experiences, then let the agents loose in the world to interact…. To study the group of AI agents, the researchers set up a virtual town called “Smallville,” which includes houses, a cafe, a park, and a grocery store…. Interestingly, when the characters in the sandbox world encounter each other, they often speak to each other using natural language provided by ChatGPT. In this way, they exchange information and form memories about their daily lives.

When the researchers combined these basic ingredients together and ran the simulation, interesting things began to happen. In the paper, the researchers list three emergent behaviors resulting from the simulation. None of these were pre-programmed but rather resulted from the interactions between the agents. These included “information diffusion” (agents telling each other information and having it spread socially among the town), “relationship memory” (memory of past interactions between agents and mentioning those earlier events later), and “coordination” (planning and attending a Valentine’s Day party together with other agents)…. “Starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party,” the researchers write, “the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time….”

To get a look at Smallville, the researchers have posted an interactive demo online through a special website, but it’s a “pre-computed replay of a simulation” described in the paper and not a real-time simulation. Still, it gives a good illustration of the richness of social interactions that can emerge from an apparently simple virtual world running in a computer sandbox.
Interstingly, the researchers hired human evaluators to gauge how well the AI agents produced believable responses — and discovered they were more believable than when supplied their own responses.

Thanks to long-time Slashdot reader Baron_Yam for sharing the article.

Read more of this story at Slashdot.

How Should AI Be Regulated?

A New York Times opinion piece argues people in the AI industry “are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down.” But how?

What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That’s where the government comes in — or so they hope… [A]fter talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I’d prioritize.

The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand… The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.

The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet. Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast.
The piece also recommends that AI-design companies “bear at least some liability for what their models.” But what legislation should we see — and what legislation will we see? “One thing regulators shouldn’t fear is imperfect rules that slow a young industry,” the piece argues.

“For once, much of that industry is desperate for someone to help slow it down.”

Read more of this story at Slashdot.

‘Pausing AI Developments Isn’t Enough. We Need To Shut It All Down’

Earlier today, more than 1,100 artificial intelligence experts, industry leaders and researchers signed a petition calling on AI developers to stop training models more powerful than OpenAI’s ChatGPT-4 for at least six months. Among those who refrained from signing it was Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher at the Machine Intelligence Research Institute. He’s been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

“This 6-month moratorium would be better than no moratorium,” writes Yudkowsky in an opinion piece for Time Magazine. “I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.” Yudkowsky cranks up the rhetoric to 100, writing: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” Here’s an excerpt from his piece: The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. […] It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. […]

It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence — not perfect safety, safety in the sense of “not killing literally everyone” — could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.

Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die. You can read the full letter signed by AI leaders here.

Read more of this story at Slashdot.