Apple’s Large Language Model Shows Up in New iOS Code

An anonymous reader shares a report: Apple is widely expected to unveil major new artificial intelligence features with iOS 18 in June. Code found by 9to5Mac in the first beta of iOS 17.4 shows that Apple is continuing to work on a new version of Siri powered by large language model technology, with a little help from other sources. In fact, Apple appears to be using OpenAI’s ChatGPT API for internal testing to aid the development of its own AI models. According to this code, iOS 17.4 includes a new SiriSummarization private framework that makes calls to OpenAI’s ChatGPT API. This appears to be something Apple is using for internal testing of its new AI features. iOS 17.4 also contains multiple examples of system prompts for the SiriSummarization framework, including “please summarize,” “please answer this questions,” and “please summarize the given text.”
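
For readers unfamiliar with the API in question, here is a minimal sketch of what a summarization call to OpenAI’s public chat-completions endpoint looks like, using one of the reported system prompts. The model name is an assumption (the report doesn’t name one), and nothing below is taken from Apple’s private framework code:

```typescript
// Minimal sketch of a summarization request to OpenAI's public
// chat-completions endpoint, using one of the system prompts reportedly
// found in iOS 17.4. Nothing here is Apple's private framework code.
const OPENAI_URL = "https://api.openai.com/v1/chat/completions";

async function summarize(text: string, apiKey: string): Promise<string> {
  const response = await fetch(OPENAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // assumed model; the report doesn't name one
      messages: [
        { role: "system", content: "please summarize the given text" },
        { role: "user", content: text },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content; // first completion's text
}
```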

Apple is unlikely to use OpenAI models to power any of its artificial intelligence features in iOS 18. Instead, what it’s doing here is testing its own AI models against ChatGPT. For example, the SiriSummarization framework can perform summarization using on-device models. Apple appears to be using its own AI models to power this framework, then internally comparing its results against those of ChatGPT. In total, iOS 17.4 code suggests Apple is testing four different AI models. These include Apple’s internal model called “Ajax,” which Bloomberg has previously reported on. iOS 17.4 shows there are two versions of AjaxGPT, one processed on-device and one that is not.
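
If Apple is indeed benchmarking its own models against ChatGPT, the harness would conceptually look something like the hypothetical sketch below, where `runOnDeviceModel` stands in for whatever private interface the on-device AjaxGPT exposes; it is not a real API:

```typescript
// Hypothetical side-by-side test in the spirit of what's described above:
// run the same input through a local model and through ChatGPT, then log
// both outputs. runOnDeviceModel stands in for whatever private interface
// Apple's on-device AjaxGPT exposes; it is not a real API.
declare function runOnDeviceModel(prompt: string): Promise<string>;

async function compareModels(text: string, apiKey: string): Promise<void> {
  const [local, remote] = await Promise.all([
    runOnDeviceModel(`please summarize the given text\n\n${text}`),
    summarize(text, apiKey), // the ChatGPT call sketched earlier
  ]);
  console.log({ local, remote }); // a real harness would diff or score these
}
```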


FTC Launches Inquiry Into AI Deals by Tech Giants

The Federal Trade Commission launched an inquiry on Thursday into the multibillion-dollar investments by Microsoft, Amazon and Google in the artificial intelligence start-ups OpenAI and Anthropic, broadening the regulator’s efforts to corral the power the tech giants can have over A.I. The New York Times: These deals have allowed the big companies to form deep ties with their smaller rivals while dodging most government scrutiny. Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT, while Amazon and Google have each committed billions of dollars to Anthropic, another leading A.I. start-up.

Regulators have typically focused on bringing antitrust lawsuits against deals in which tech giants buy rivals outright or use acquisitions to expand into new businesses, leading to increased prices and other harms; they have not regularly challenged the stakes those companies buy in start-ups. The F.T.C.’s inquiry will examine how these investment deals alter the competitive landscape, and it could inform any investigations by federal antitrust regulators into whether the deals have broken laws.

The F.T.C. said it would ask Microsoft, OpenAI, Amazon, Google and Anthropic to describe their influence over their partners and how they worked together to make decisions. It also said it would demand that they provide any internal documents that could shed light on the deals and their potential impact on competition.


OpenAI Quietly Scrapped a Promise To Disclose Key Documents To the Public

From its founding, OpenAI said its governing documents were available to the public. When WIRED requested copies after the company’s boardroom drama, it declined to provide them. Wired: Wealthy tech entrepreneurs including Elon Musk launched OpenAI in 2015 as a nonprofit research lab that they said would involve society and the public in the development of powerful AI, unlike Google and other giant tech companies working behind closed doors. In line with that spirit, OpenAI’s reports to US tax authorities have said since its founding that any member of the public can review copies of its governing documents, financial statements, and conflict of interest rules. But when WIRED requested those records last month, OpenAI said its policy had changed, and the company provided only a narrow financial statement that omitted the majority of its operations.

“We provide financial statements when requested,” company spokesperson Niko Felix says. “OpenAI aligns our practices with industry standards, and since 2022 that includes not publicly distributing additional internal documents.” OpenAI’s abandonment of the long-standing transparency pledge obscures information that could shed light on the recent near-implosion of a company with crucial influence over the future of AI and could help outsiders understand its vulnerabilities. In November, OpenAI’s board fired CEO Sam Altman, implying in a statement that he was untrustworthy and had endangered its mission to ensure AI “benefits all humanity.” An employee and investor revolt soon forced the board to reinstate Altman and eject most of its own members, with an overhauled slate of directors vowing to review the crisis and enact structural changes to win back the trust of stakeholders.


OpenAI Unveils Plans For Tackling Abuse Ahead of 2024 Elections

Sara Fischer reports via Axios: ChatGPT maker OpenAI says it’s rolling out new policies and tools meant to combat misinformation and abuse ahead of 2024 elections worldwide. 2024 is one of the biggest election years in history — with high-stakes races in over 50 countries globally. It’s also the first major election cycle where generative AI tools will be widely available to voters, governments and political campaigns. In a statement published Monday, OpenAI said it will lean into verified news and image authenticity programs to ensure users get access to high-quality information throughout elections.

The company will add digital credentials set by a third-party coalition of AI firms that encode details about the origin of images created using its image generator tool, DALL-E 3. The firm says it’s experimenting with a new “provenance classifier” tool that can detect AI-generated images that have been made using DALL-E. It hopes to make that tool available to its first group of testers, including journalists, researchers and other tech platforms, for feedback. OpenAI will continue integrating its ChatGPT platform with real-time news reporting globally, “including attribution and links,” it said. That effort builds on a first-of-its-kind deal announced with German media giant Axel Springer last year that offers ChatGPT users summaries of select global news content from the company’s outlets. In the U.S., OpenAI says it’s working with the nonpartisan National Association of Secretaries of State to direct ChatGPT users to CanIVote.org for authoritative information on U.S. voting and elections.
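
The “digital credentials” described here refer to the kind of signed provenance manifest championed by the C2PA coalition: the image carries metadata describing its origin, and a consumer verifies it. The sketch below is a deliberately simplified illustration of that idea; the manifest shape and the `extractManifest` helper are invented for this example and do not reflect the real C2PA format or any OpenAI API:

```typescript
// Deliberately simplified illustration of content credentials: the image
// carries a signed manifest describing its origin, and a consumer verifies
// it. The manifest shape and extractManifest are invented for this sketch
// and do not reflect the actual C2PA format.
interface ProvenanceManifest {
  generator: string; // e.g. "DALL-E 3"
  issuedAt: string; // ISO 8601 timestamp
  signatureValid: boolean; // result of checking the embedded signature
}

declare function extractManifest(image: Uint8Array): ProvenanceManifest | null;

function describeOrigin(image: Uint8Array): string {
  const manifest = extractManifest(image);
  if (manifest === null) return "No provenance data embedded.";
  if (!manifest.signatureValid) return "Provenance data present but not verifiable.";
  return `Generated by ${manifest.generator} at ${manifest.issuedAt}.`;
}
```

The hard part in practice, and what the separate “provenance classifier” aims at, is the case where no manifest survives, since metadata is easily stripped by screenshots and re-encoding.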

Because generative AI technology is so new, OpenAI says it’s still working to understand how effective its tools might be for political persuasion. To hedge against abuse, the firm doesn’t allow people to build applications for political campaigning and lobbying, and it doesn’t allow engineers to create chatbots that pretend to be real people, such as candidates. Like most tech firms, it doesn’t allow users to build applications for its ChatGPT platform that deter people from participating in the democratic process.


GTA 5 Actor Goes Nuclear On AI Company That Made Voice Chatbot of Him

Rich Stanton reports via PC Gamer: Ned Luke, the actor whose roles include GTA 5’s Michael De Santa, has gone off on an AI company that released an unlicensed voice chatbot based on the character, and succeeded in having the offending bot nuked from the internet. AI company WAME had tweeted a link to its Michael chatbot on January 14 along with the text: “Any GTA fans around here? Now take your gaming experience to another level. Try having a realistic voice conversation with Michael De Santa, the protagonist of GTA 5, right now!”

Unfortunately for WAME, it quickly attracted the attention of Luke, who does not seem like the type of man to hold back. “This is fucking bullshit WAME,” says Luke (though I am definitely hearing all this in Michael’s voice). “Absolutely nothing cool about ripping people off with some lame computer estimation of my voice. Don’t waste your time on this garbage.” Luke also tagged Rockstar Games and the SAG-AFTRA union, and since then the chatbot and the tweets promoting it have been deleted. Fellow actors including Roger Clark weighed in with sympathy about how much this kind of stuff sucks, and our boy wasn’t done by a long shot:

“I’m not worried about being replaced, Roger,” says Luke. “I just hate these fuckers, and am pissed as fuck that our shitty union is so damn weak that this will soon be an issue on legit work, not just some lame douchebag tryna make $$ off of our voices.” Luke is here referring to a recent SAG-AFTRA “ethical AI” agreement which has gone down with some of its members like a cup of cold sick. Musician Marilou A. Burnel (not affiliated with WAME) pops up to suggest that “creative people make remarkable things with AI, and the opposite is also true.” “Not using my voice they don’t,” says Luke. WAME issued a statement expressing their “profound understanding and concern.”

“This incident has highlighted the intricate interplay between the advancement of AI technology and the ethical and legal realms,” says WAME. “WAME commits to protecting the rights of voice actors and creators while advancing ethical AI practices.”


Bill Gates Interviews Sam Altman, Who Predicts Fastest Tech Revolution ‘By Far’

This week on his podcast, Bill Gates asked Sam Altman how his team is doing after his (temporary) ouster. Altman replies: “a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that’s like a silver lining of all of this. In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us.”

The rest of their conversation was recorded pre-ouster, but it offered fascinating glimpses of the possible future of AI, including the prospect of very speedy improvements. Altman suggests it will be easier to understand how a creative work gets “encoded” in an AI than it would be in a human brain. “There has been some very good work on interpretability, and I think there will be more over time… The little bits we do understand have, as you’d expect, been very helpful in improving these things. We’re all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast….”

BILL GATES: I’m pretty sure, within the next five years, we’ll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we’re able to do today.
SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what’s going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.

BILL GATES: Yes, in physics, biology, it’s sometimes just messing around, and it’s like, whoa — how does this actually come together…? When you look at the next two years, what do you think some of the key milestones will be?

SAM ALTMAN: Multimodality will definitely be important.
BILL GATES: Which means speech in, speech out?
SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that…. [B]ut maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important.

Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.
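
Altman’s reliability point is essentially best-of-N sampling: ask the model N times and keep the best answer. Here is a minimal sketch of the idea, where `generate` and `score` are hypothetical stand-ins for a model call and an answer-quality scorer; building a good scorer is the genuinely hard part, which is why reliability remains an open problem:

```typescript
// Minimal sketch of best-of-N sampling. generate and score are hypothetical
// stand-ins for a model call and an answer-quality scorer.
declare function generate(prompt: string): Promise<string>;
declare function score(prompt: string, answer: string): Promise<number>;

async function bestOfN(prompt: string, n: number): Promise<string> {
  // Draw n independent samples for the same prompt.
  const candidates = await Promise.all(
    Array.from({ length: n }, () => generate(prompt)),
  );
  // Keep the highest-scoring candidate.
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const candidate of candidates) {
    const s = await score(prompt, candidate);
    if (s > bestScore) {
      bestScore = s;
      best = candidate;
    }
  }
  return best;
}
```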
Areas where Altman sees potential are healthcare, education, and especially computer programming. “If you make a programmer three times more effective, it’s not just that they can do three times more stuff, it’s that they can — at that higher level of abstraction, using more of their brainpower — they can now think of totally different things. It’s like, going from punch cards to higher level languages didn’t just let us program a little faster — it let us do these qualitatively new things. And we’re really seeing that…

“I think it’s worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be.”

He predicts the fastest technology revolution “by far,” worrying about “the speed with which society is going to have to adapt, and that the labor market will change.” But soon he adds that “We started investing a little bit in robotics companies. On the physical hardware side, there’s finally, for the first time that I’ve ever seen, really exciting new platforms being built there.”

And at some point Altman tells Gates he’s optimistic that AI could contribute to helping humans get along with each other.


Cloudflare CTO Predicts Coding AIs Will Bring More Productivity, Urges ‘Data Fluidity’

Serverless JavaScript is hosted in an edge network or by an HTTP caching service (and only runs when requested), explains Cloudflare. “Developers can write and deploy JavaScript functions that process HTTP requests before they travel all the way to the origin server.”
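
Concretely, a Cloudflare Worker is an exported fetch handler that runs at the edge on each request and can respond before the origin is ever reached. A minimal example in standard Workers module syntax (the /ping route is chosen purely for illustration):

```typescript
// A minimal Cloudflare Worker: an exported fetch handler that runs only
// when a request arrives and can answer before the origin is ever reached.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Answer this route entirely at the edge (route chosen for illustration).
    if (url.pathname === "/ping") {
      return new Response("pong", { status: 200 });
    }
    return fetch(request); // everything else passes through to the origin
  },
};
```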

Their platform for serverless JavaScript will soon have built-in AI features, Cloudflare’s CTO announced today, “so that developers have a rich toolset at their disposal.
A developer platform without AI isn’t going to be much use. It’ll be a bit like a developer platform that can’t do floating point arithmetic, or handle a list of data. We’re going to see every developer platform have AI capability built in because these capabilities will allow developers to make richer experiences for users…

As I look back at 40 years of my programming life, I haven’t been this excited about a new technology… ever. That’s because AI is going to be a pervasive change to how programs get written, who writes programs and how all of us interact with software… I think it’ll make us more productive and make more people programmers.

In addition, developers on the platform will be able to train and upload their own models to run on Cloudflare’s global network:
Unlike a database where data might largely be stored and accessed infrequently, AI systems are alive with moving data. To accommodate that, platforms need to stop treating data as something to lock in developers with. Data needs to be free to move from system to system, from platform to platform, without transfer fees, egress or other nonsense. If we want a world of AI, we need a world of data fluidity.
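
For a sense of what “AI capability built in” can mean in practice, here is a hedged sketch of a Worker calling a model through an AI binding. The binding name, model identifier, and response shape below are assumptions for illustration, not details taken from the post:

```typescript
// Hedged sketch of a Worker calling a model through an AI binding. The
// binding name (AI), model identifier, and response shape are assumptions
// for illustration, not details taken from the announcement.
interface Env {
  AI: { run(model: string, input: object): Promise<{ response: string }> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const question = new URL(request.url).searchParams.get("q") ?? "Hello";
    const result = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
      prompt: question,
    });
    return new Response(result.response);
  },
};
```

The design point is less the specific binding than the principle the post argues for: inference as a platform primitive rather than an external service bolted on.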
