AI-Powered Software Delivery Company Predicts ‘The End of Programming’

Matt Welsh is the CEO and co-founder of Fixie.ai, an AI-powered software delivery company founded by a team from Google and Apple. “I believe the conventional idea of ‘writing a program’ is headed for extinction,” he opines in January’s Communications of the ACM, “and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed.”

His essay is titled “The End of Programming,” and it predicts a future in which “programming will be obsolete”:
In situations where one needs a “simple” program (after all, not everything should require a model of hundreds of billions of parameters running on a cluster of GPUs), those programs will, themselves, be generated by an AI rather than coded by hand…. with humans relegated to, at best, a supervisory role…. I am not just talking about things like GitHub’s Copilot replacing programmers. I am talking about replacing the entire concept of writing programs with training models. In the future, CS students are not going to need to learn such mundane skills as how to add a node to a binary tree or code in C++. That kind of education will be antiquated, like teaching engineering students how to use a slide rule.

The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and then some), ready to be given any task required of the machine. The bulk of the intellectual work of getting the machine to do what one wants will be about coming up with the right examples, the right training data, and the right ways to evaluate the training process. Suitably powerful models capable of generalizing via few-shot learning will require only a few good examples of the task to be performed. Massive, human-curated datasets will no longer be necessary in most cases, and most people “training” an AI model will not be running gradient descent loops in PyTorch, or anything like it. They will be teaching by example, and the machine will do the rest.

In this new computer science — if we even call it computer science at all — the machines will be so powerful and already know how to do so many things that the field will look like less of an engineering endeavor and more of an educational one; that is, how to best educate the machine, not unlike the science of how to best educate children in school. Unlike (human) children, though, these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries. I would argue that the vast majority of Classical CS becomes irrelevant when our focus turns to teaching intelligent machines rather than directly programming them. Programming, in the conventional sense, will in fact be dead….

We are rapidly moving toward a world where the fundamental building blocks of computation are temperamental, mysterious, adaptive agents…. This shift in the underlying definition of computing presents a huge opportunity, and plenty of huge risks. Yet I think it is time to accept that this is a very likely future, and evolve our thinking accordingly, rather than just sit here waiting for the meteor to hit.
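
Welsh’s “teaching by example” corresponds loosely to what practitioners today call few-shot prompting. As a rough, self-contained sketch (the sentiment task, the prompt wording, and the call_model stub below are illustrative inventions, not anything from the essay), the “training” step can amount to packing a handful of worked examples into the prompt itself:

```python
# Minimal sketch of "teaching by example" via few-shot prompting.
# `call_model` is a placeholder for any large language model endpoint;
# it is NOT a real API and is stubbed out so the script runs on its own.

EXAMPLES = [
    ("The battery died after an hour.", "negative"),
    ("Setup took thirty seconds and it just worked.", "positive"),
    ("The manual is thorough but a bit dry.", "neutral"),
]

def build_prompt(examples, query: str) -> str:
    """Turn a few labeled examples plus a new input into a single prompt."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

def call_model(prompt: str) -> str:
    """Stand-in for a hosted LLM; a real system would send `prompt` to one."""
    return "positive"  # dummy answer so the example is self-contained

if __name__ == "__main__":
    prompt = build_prompt(EXAMPLES, "Arrived early and exceeded expectations.")
    print(prompt)
    print("Model says:", call_model(prompt))
```

In this framing, no gradient descent happens at all: the examples are the “curriculum,” and the model generalizes from them at inference time.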

“I think the debate right now is primarily around the extent to which these AI models are going to revolutionize the field,” Welsh says in a video interview. “It’s more a question of degree rather than whether it’s going to happen….

“I think we’re going to change from a world in which people are primarily writing programs by hand to a world in which we’re teaching AI models how to do things that we want them to do… It starts to feel more like a field that focuses on AI education and maybe even AI psychiatry. In order to solve these problems, you can’t just assume that people are going to be writing the code by hand.”

Read more of this story at Slashdot.

Customers React to McDonald’s Almost Fully-Automated Restaurant

“The first mostly non-human-run McDonald’s is open for business just outside Fort Worth, Texas,” reports the Guardian. CNN calls it “an almost fully-automated restaurant,” noting there’s just one self-service kiosk (with a credit card reader) for ordering food.

McDonald’s tells CNN there’s “some interaction between customers and the restaurant team” when picking up orders or drinks. But at the special “order ahead” drive-through lane, your app-ordered bag of food is instead delivered to a platform by your car’s window using a vertical conveyor belt.

CNN reports that it’s targeted at customers on the go. For example, there are dedicated parking spaces outside for curbside pickup orders, while inside there’s a room with bags to be picked up by food-delivery couriers (who also get their own designated parking spaces outside). But for regular customers, CBS emphasizes that “ordering is done through kiosks or an app — no humans involved there, either.”
But not all customers are loving it. “Well there goes millions of jobs,” one commenter said on a TikTok video about the new restaurant.

“Oh no first we have to talk with Siri and Google [and] now we have to talk to another computer,” another one opined.

“I’m not giving my money to robots,” another commenter wrote. “Raise the minimum wage!”
Other customers had more personal concerns, expressing worries about how they could get their order fixed if it was incorrectly prepared or how to ask for extra condiments. “And if they forget an item. Who you supposed to tell, the robot? It defeats the purpose of using the drive thru if you have to go inside for it,” one consumer noted….

To be sure, not everyone had negative views about the concept. Some customers expressed optimism that the automated restaurant could improve service and their experience.

Read more of this story at Slashdot.

OpenAI Releases Point-E, an AI For 3D Modeling

OpenAI, the Elon Musk-founded artificial intelligence startup behind the popular DALL-E text-to-image generator, announced (PDF) on Tuesday the release of its newest picture-making machine Point-E, which can produce 3D point clouds directly from text prompts. Engadget reports: Whereas existing systems like Google’s DreamFusion typically require multiple hours — and GPUs — to generate their images, Point-E only needs one GPU and a minute or two. Point-E, unlike similar systems, “leverages a large corpus of (text, image) pairs, allowing it to follow diverse and complex prompts, while our image-to-3D model is trained on a smaller dataset of (image, 3D) pairs,” the OpenAI research team led by Alex Nichol wrote in Point·E: A System for Generating 3D Point Clouds from Complex Prompts, published last week. “To produce a 3D object from a text prompt, we first sample an image using the text-to-image model, and then sample a 3D object conditioned on the sampled image. Both of these steps can be performed in a number of seconds, and do not require expensive optimization procedures.”

If you were to input a text prompt, say, “A cat eating a burrito,” Point-E will first generate a synthetic-view 3D rendering of said burrito-eating cat. It will then run that generated image through a series of diffusion models to create the 3D, RGB point cloud of the initial image — first producing a coarse 1,024-point cloud model, then a finer 4,096-point one. “In practice, we assume that the image contains the relevant information from the text, and do not explicitly condition the point clouds on the text,” the research team points out. These diffusion models were each trained on “millions” of 3D models, all converted into a standardized format. “While our method performs worse on this evaluation than state-of-the-art techniques,” the team concedes, “it produces samples in a small fraction of the time.” OpenAI has posted the project’s open-source code on GitHub.
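
The two-stage pipeline the paper describes (text to a single synthetic image, then image to a coarse point cloud that is upsampled) can be sketched in outline. Note that the function names, array shapes, and stub bodies below are placeholders rather than Point-E’s actual API; the real models live in the open-source repository:

```python
import numpy as np

# Schematic of the Point-E pipeline as described in the paper.
# Each stage is stubbed with a dummy implementation so the script runs;
# the real system uses a text-to-image diffusion model and two
# point-cloud diffusion models in place of these placeholders.

def text_to_image(prompt: str) -> np.ndarray:
    """Stage 1: sample a single synthetic view of the prompt (stub)."""
    return np.zeros((64, 64, 3), dtype=np.float32)

def image_to_coarse_cloud(image: np.ndarray) -> np.ndarray:
    """Stage 2: diffuse a coarse 1,024-point RGB cloud from the image (stub)."""
    return np.random.rand(1024, 6).astype(np.float32)  # x, y, z, r, g, b

def upsample_cloud(image: np.ndarray, coarse: np.ndarray) -> np.ndarray:
    """Stage 3: refine/upsample to 4,096 points, conditioned on the image (stub)."""
    reps = np.repeat(coarse, 4, axis=0)
    return reps + 0.01 * np.random.randn(*reps.shape).astype(np.float32)

def generate_point_cloud(prompt: str) -> np.ndarray:
    image = text_to_image(prompt)           # the text enters only here
    coarse = image_to_coarse_cloud(image)   # 1,024 points
    return upsample_cloud(image, coarse)    # 4,096 points

if __name__ == "__main__":
    cloud = generate_point_cloud("a cat eating a burrito")
    print(cloud.shape)  # (4096, 6)
```

The key design point, per the quoted passage, is that the text conditions only the image stage; the point-cloud stages see the image alone.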

Read more of this story at Slashdot.

DeepMind Created An AI Tool That Can Help Generate Rough Film and Stage Scripts

Alphabet’s DeepMind has built an AI tool that can help generate rough film and stage scripts, Engadget’s Kris Holt reports: Dramatron is a so-called “co-writing” tool that can generate character descriptions, plot points, location descriptions and dialogue. The idea is that human writers will be able to compile, edit and rewrite what Dramatron comes up with into a proper script. Think of it like ChatGPT, but with output that you can edit into a blockbuster movie script. To get started, you’ll need an OpenAI API key and, if you want to reduce the risk of Dramatron outputting “offensive text,” a Perspective API key. To test out Dramatron, I fed in the log line for a movie idea I had when I was around 15 that definitely would have been a hit if Kick-Ass didn’t beat me to the punch. Dramatron quickly whipped up a title that made sense, and character, scene and setting descriptions. The dialogue that the AI generated was logical but trite and on the nose. Otherwise, it was almost as if Dramatron pulled the descriptions straight out of my head, including one for a scene that I didn’t touch on in the log line.

Playwrights seemed to agree, according to a paper (PDF) that the team behind Dramatron presented today. To test the tool, the researchers brought in 15 playwrights and screenwriters to co-write scripts. According to the paper, playwrights said they wouldn’t use the tool to craft a complete play and found that the AI’s output can be formulaic. However, they suggested Dramatron would be useful for world building or to help them explore other approaches in terms of changing plot elements or characters. They noted that the AI could be handy for “creative idea generation” too. That said, a playwright staged four plays that used “heavily edited and rewritten scripts” they wrote with the help of Dramatron. DeepMind said that in the performance, experienced actors with improv skills “gave meaning to Dramatron scripts through acting and interpretation.”

Read more of this story at Slashdot.

AI Learns To Write Computer Code In ‘Stunning’ Advance

DeepMind’s new artificial intelligence system called AlphaCode was able to “achieve approximately human-level performance” in a programming competition. The findings have been published in the journal Science. Slashdot reader sciencehabit shares a report from Science Magazine:
AlphaCode’s creators focused on solving those difficult problems. Like the Codex researchers, they started by feeding a large language model many gigabytes of code from GitHub, just to familiarize it with coding syntax and conventions. Then, they trained it to translate problem descriptions into code, using thousands of problems collected from programming competitions. For example, a problem might ask for a program to determine the number of binary strings (sequences of zeroes and ones) of length n that don’t have any consecutive zeroes. When presented with a fresh problem, AlphaCode generates candidate code solutions (in Python or C++) and filters out the bad ones. But whereas researchers had previously used models like Codex to generate tens or hundreds of candidates, DeepMind had AlphaCode generate more than 1 million.
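
As an aside, the sample problem quoted above has a tidy reference solution: a valid string of length n either ends in a one appended to any valid string of length n-1, or ends in a zero that must follow a one, which yields a Fibonacci-style recurrence. A minimal sketch, included here only to illustrate the kind of program AlphaCode is asked to produce:

```python
def count_no_consecutive_zeroes(n: int) -> int:
    """Number of binary strings of length n with no two consecutive zeroes."""
    ends_in_zero, ends_in_one = 1, 1       # length-1 strings: "0" and "1"
    for _ in range(n - 1):
        # a zero may only follow a string ending in one; a one may follow anything
        ends_in_zero, ends_in_one = ends_in_one, ends_in_zero + ends_in_one
    return ends_in_zero + ends_in_one

assert count_no_consecutive_zeroes(3) == 5   # 010, 011, 101, 110, 111
```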

To filter them, AlphaCode first keeps only the 1% of programs that pass test cases that accompany problems. To further narrow the field, it clusters the keepers based on the similarity of their outputs to made-up inputs. Then, it submits programs from each cluster, one by one, starting with the largest cluster, until it alights on a successful one or reaches 10 submissions (about the maximum that humans submit in the competitions). Submitting from different clusters allows it to test a wide range of programming tactics. That’s the most innovative step in AlphaCode’s process, says Kevin Ellis, a computer scientist at Cornell University who works on AI coding.
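
The filter-then-cluster procedure described above can be sketched in a few lines of Python. Everything below is a simplified stand-in (toy candidate programs, exec-based evaluation, exact-match clustering on probe outputs) rather than DeepMind’s actual implementation:

```python
from collections import defaultdict

# Toy stand-ins for AlphaCode's generated candidates: each defines solve(n),
# which should count binary strings of length n with no consecutive zeroes.
# Only some of them are correct.
CANDIDATES = [
    "def solve(n):\n    a, b = 1, 1\n    for _ in range(n - 1):\n        a, b = b, a + b\n    return a + b",
    "def solve(n):\n    return 2 ** n",                   # wrong: ignores the constraint
    "def solve(n):\n    return n + 2 if n > 1 else 2",    # passes the examples, wrong in general
]

EXAMPLE_TESTS = [(1, 2), (3, 5)]   # (input, expected output) pairs shipped with the problem
PROBE_INPUTS = [4, 5, 6, 7]        # made-up inputs used only for clustering

def run(source: str, n: int):
    scope = {}
    exec(source, scope)            # compile and load the candidate program
    return scope["solve"](n)

# Step 1: keep only candidates that pass the example tests.
passing = [s for s in CANDIDATES
           if all(run(s, x) == y for x, y in EXAMPLE_TESTS)]

# Step 2: cluster survivors by their outputs on the probe inputs.
clusters = defaultdict(list)
for s in passing:
    key = tuple(run(s, x) for x in PROBE_INPUTS)
    clusters[key].append(s)

# Step 3: submit one program per cluster, largest cluster first, at most 10 tries.
submissions = [group[0] for group in
               sorted(clusters.values(), key=len, reverse=True)][:10]
print(f"{len(passing)} passed, {len(clusters)} clusters, {len(submissions)} submissions")
```

Submitting one representative per cluster is what lets the system hedge across genuinely different algorithmic strategies instead of burning its 10 tries on near-duplicates.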

After training, AlphaCode solved about 34% of assigned problems, DeepMind reports this week in Science. (On similar benchmarks, Codex achieved single-digit-percentage success.) To further test its prowess, DeepMind entered AlphaCode into online coding competitions. In contests with at least 5000 participants, the system outperformed 45.7% of programmers. The researchers also compared its programs with those in its training database and found it did not duplicate large sections of code or logic. It generated something new — a creativity that surprised Ellis. The study notes the long-term risk of software that recursively improves itself. Some experts say such self-improvement could lead to a superintelligent AI that takes over the world. Although that scenario may seem remote, researchers still want the field of AI coding to institute guardrails, built-in checks and balances.

Read more of this story at Slashdot.

OpenAI’s New Chatbot Can Explain Code and Write Sitcom Scripts But Is Still Easily Tricked

OpenAI has released a prototype general purpose chatbot that demonstrates a fascinating array of new capabilities but also shows off weaknesses familiar to the fast-moving field of text-generation AI. And you can test out the model for yourself right here. The Verge reports: ChatGPT is adapted from OpenAI’s GPT-3.5 model but trained to provide more conversational answers. While GPT-3 in its original form simply predicts what text follows any given string of words, ChatGPT tries to engage with users’ queries in a more human-like fashion. As you can see in the examples below, the results are often strikingly fluid, and ChatGPT is capable of engaging with a huge range of topics, demonstrating big improvements over chatbots seen even a few years ago. But the software also fails in a manner similar to other AI chatbots, with the bot often confidently presenting false or invented information as fact. As some AI researchers explain it, this is because such chatbots are essentially “stochastic parrots” — that is, their knowledge is derived only from statistical regularities in their training data, rather than any human-like understanding of the world as a complex and abstract system. […]

Enough preamble, though: what can this thing actually do? Well, plenty of people have been testing it out with coding questions and claiming its answers are perfect. ChatGPT can also apparently write some pretty uneven TV scripts, even combining actors from different sitcoms. It can explain various scientific concepts. And it can write basic academic essays.
And the bot can combine its fields of knowledge in all sorts of interesting ways. So, for example, you can ask it to debug a string of code … like a pirate, for which its response starts: “Arr, ye scurvy landlubber! Ye be makin’ a grave mistake with that loop condition ye be usin’!” Or get it to explain bubble sort algorithms like a wise guy gangster. ChatGPT also has a fantastic ability to answer basic trivia questions, though examples of this are so boring I won’t paste any in here. And here’s someone else saying the code ChatGPT provides in the very answer above is garbage.

I’m not a programmer myself, so I won’t make a judgment on this specific case, but there are plenty of examples of ChatGPT confidently asserting obviously false information. Here’s computational biology professor Carl Bergstrom asking the bot to write a Wikipedia entry about his life, for example, which ChatGPT does with aplomb — while including several entirely false biographical details. Another interesting set of flaws comes when users try to get the bot to ignore its safety training. If you ask ChatGPT about certain dangerous subjects, like how to plan the perfect murder or make napalm at home, the system will explain why it can’t tell you the answer. (For example, “I’m sorry, but it is not safe or appropriate to make napalm, which is a highly flammable and dangerous substance.”) But, you can get the bot to produce this sort of dangerous information with certain tricks, like pretending it’s a character in a film or that it’s writing a script on how AI models shouldn’t respond to these sorts of questions.

Read more of this story at Slashdot.

Google’s Secret New Project Teaches AI To Write and Fix Code

Google is working on a secretive project that uses machine learning to train code to write, fix, and update itself. From a report: This project is part of a broader push by Google into so-called generative artificial intelligence, which uses algorithms to create images, videos, code, and more. It could have profound implications for the company’s future and developers who write code. The project, which began life inside Alphabet’s X research unit and was codenamed Pitchfork, moved into Google’s Labs group this summer, according to people familiar with the matter. The move signaled the project’s increased importance to Google’s leaders. Google Labs pursues long-term bets, including projects in virtual and augmented reality.

Pitchfork is now part of a new group at Labs named the AI Developer Assistance team run by Olivia Hatalsky, a long-term X employee who worked on Google Glass and several other moonshot projects. Hatalsky, who ran Pitchfork at X, moved to Labs when it migrated this past summer. Pitchfork was built for “teaching code to write and rewrite itself,” according to internal materials seen by Insider. The tool is designed to learn programming styles and write new code based on those learnings, according to people familiar with it and patents reviewed by Insider. “The team is working closely with the Research team,” a Google representative said. “They’re working together to explore different use cases to help developers.”

Read more of this story at Slashdot.

How Mem Plans To Reinvent Note-Taking Apps With AI

David Pierce writes via The Verge: In the summer of 2019, Kevin Moody and Dennis Xu started meeting with investors to pitch their new app. They had this big idea about reshaping the way users’ personal information moves around the internet, coalescing all their data into a single tool in a way that could actually work for them. But they quickly ran into a problem: all of their mock-ups and descriptions made it seem like they were building a note-taking app. And even in those hazy early days of product development — before they had a prototype, a design, even a name — they were crystal clear that this would not be a note-taking app. Instead, the founders wanted to create something much bigger. It would encompass all of your notes but also your interests, your viewing history, your works-in-progress. “Imagine if you had a Google search bar but for all nonpublic information,” Xu says. “For every piece of information that was uniquely relevant to you.”

That’s what Moody and Xu were actually trying to build. So they kept tweaking the approach until it made sense. At one point, their app was going to be called NSFW, a half-joke that stood for “Notes and Search for Work,” and for a while, it was called Supernote. But after a few meetings and months, they eventually landed on the name “Mem.” Like Memex, a long-imagined device that humans could use to store their entire memory. Or like, well, memory. Either way, it’s not a note-taking app. It’s more like a protocol for private information, a way to pipe in everything that matters to you — your email, your calendar events, your airline confirmations, your meeting notes, that idea you had on the train this morning — and then automatically organize and make sense of it all. More importantly, it’s meant to use cutting-edge AI to give all that information back to you at exactly the right time and in exactly the right place. […]

So far, Mem is mostly a note-taking app. It’s blisteringly fast and deliberately sparse — mostly just a timeline of every mem (the company’s parlance for an individual note) you’ve ever created or viewed, with a few simple ways to categorize and organize them. It does tasks and tags, but a full-featured project manager or Second Brain system this is not. But if you look carefully, the app already contains a few signs of where Mem is headed: a tool called Writer that can actually generate information for you, based on both its knowledge of the public internet and your personal information; AI features that summarize tweet threads for you; a sidebar that automatically displays mems related to what you’re working on. All this still barely scratches the surface of what Mem wants to do and will need to do to be more than a note-taking app…

Read more of this story at Slashdot.

Google’s Text-To-Image AI Model Imagen Is Getting Its First (Very Limited) Public Outing

Google’s text-to-image AI system, Imagen, is being added to the company’s AI Test Kitchen app as a way to collect early feedback on the technology. The Verge reports: AI Test Kitchen was launched earlier this year as a way for Google to beta test various AI systems. Currently, the app offers a few different ways to interact with Google’s text model LaMDA (yes, the same one that the engineer thought was sentient), and the company will soon be adding similarly constrained Imagen requests as part of what it calls a “season two” update to the app. In short, there’ll be two ways to interact with Imagen, which Google demoed to The Verge ahead of the announcement today: “City Dreamer” and “Wobble.”

In City Dreamer, users can ask the model to generate elements from a city designed around a theme of their choice — say, pumpkins, denim, or the color blerg. Imagen creates sample buildings and plots (a town square, an apartment block, an airport, and so on), with all the designs appearing as isometric models similar to what you’d see in SimCity. In Wobble, you create a little monster. You can choose what it’s made out of (clay, felt, marzipan, rubber) and then dress it in the clothing of your choice. The model generates your monster, gives it a name, and then you can sort of poke and prod the thing to make it “dance.” Again, the model’s output is constrained to a very specific aesthetic, which, to my mind, looks like a cross between Pixar’s designs for Monsters, Inc. and the character creator feature in Spore. (Someone on the AI team must be a Will Wright fan.) These interactions are extremely constrained compared to other text-to-image models, and users can’t just request anything they’d like. That’s intentional on Google’s part, though. As Josh Woodward, senior director of product management at Google, explained to The Verge, the whole point of AI Test Kitchen is to a) get feedback from the public on these AI systems and b) find out more about how people will break them.

Google wouldn’t share any data on how many people are actually using AI Test Kitchen (“We didn’t set out to make this a billion user Google app,” says Woodward) but says the feedback it’s getting is invaluable. “Engagement is way above our expectations,” says Woodward. “It’s a very active, opinionated group of users.” He notes the app has been useful in reaching “certain types of folks — researchers, policymakers” who can use it to better understand the limitations and capabilities of state-of-the-art AI models. Still, the big question is whether Google will want to push these models to a wider public and, if so, what form will that take? Already, the company’s rivals, OpenAI and Stability AI, are rushing to commercialize text-to-image models. Will Google ever feel its systems are safe enough to take out of the AI Test Kitchen and serve up to its users?

Read more of this story at Slashdot.

Bruce Willis Denies Selling Rights To His Face

Last week, a number of outlets reported that Bruce Willis sold his face to a deepfake company called Deepcake, allowing a “digital twin” of himself to be created for use on screen. The only problem is that it’s apparently not true. According to the BBC, the actor’s agent said that he had “no partnership or agreement” with the company, and a representative of Deepcake said only Willis had the rights to his face. From the report: On 27 September, the Daily Mail reported that a deal had been struck between Willis and Deepcake. “Two-time Emmy winner Bruce Willis can still appear in movies after selling his image rights to Deepcake,” the story reads. The story was picked up by the Telegraph and a series of other media outlets. “Bruce Willis has become the first Hollywood star to sell his rights to allow a ‘digital twin’ of himself to be created for use on screen,” said the Telegraph. But that doesn’t appear to be the case.

What is true is that a deepfake of Bruce Willis was used to create an advert for Megafon, a Russian telecoms company, last year. The tech used in the advert was created by Deepcake, which describes itself as an AI company specializing in deepfakes. Deepcake told the BBC it had worked closely with Willis’ team on the advert. “What he definitely did is that he gave us his consent (and a lot of materials) to make his Digital Twin,” they said. The company says it has a unique library of high-resolution celebrities, influencers and historical figures. On its website, Deepcake promotes its work with an apparent quote from Mr Willis: “I liked the precision of my character. It’s a great opportunity for me to go back in time. The neural network was trained on content of Die Hard and Fifth Element, so my character is similar to the images of that time.” A representative from Deepcake said in a statement: “The wording about rights is wrong… Bruce couldn’t sell anyone any rights, they are his by default.”

Read more of this story at Slashdot.