AI-Powered Software Delivery Company Predicts ‘The End of Programming’

Matt Welsh is the CEO and co-founder of Fixie.ai, an AI-powered software delivery company founded by a team from Google and Apple. “I believe the conventional idea of ‘writing a program’ is headed for extinction,” he opines in January’s Communications of the ACM, “and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed.”

His essay is titled “The End of Programming,” and it predicts a future in which “programming will be obsolete”:
In situations where one needs a “simple” program (after all, not everything should require a model of hundreds of billions of parameters running on a cluster of GPUs), those programs will, themselves, be generated by an AI rather than coded by hand…. with humans relegated to, at best, a supervisory role…. I am not just talking about things like GitHub’s Copilot replacing programmers. I am talking about replacing the entire concept of writing programs with training models. In the future, CS students are not going to need to learn such mundane skills as how to add a node to a binary tree or code in C++. That kind of education will be antiquated, like teaching engineering students how to use a slide rule.

The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and then some), ready to be given any task required of the machine. The bulk of the intellectual work of getting the machine to do what one wants will be about coming up with the right examples, the right training data, and the right ways to evaluate the training process. Suitably powerful models capable of generalizing via few-shot learning will require only a few good examples of the task to be performed. Massive, human-curated datasets will no longer be necessary in most cases, and most people “training” an AI model will not be running gradient descent loops in PyTorch, or anything like it. They will be teaching by example, and the machine will do the rest.

In this new computer science — if we even call it computer science at all — the machines will be so powerful and already know how to do so many things that the field will look like less of an engineering endeavor and more of an educational one; that is, how to best educate the machine, not unlike the science of how to best educate children in school. Unlike (human) children, though, these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries. I would argue that the vast majority of Classical CS becomes irrelevant when our focus turns to teaching intelligent machines rather than directly programming them. Programming, in the conventional sense, will in fact be dead….

We are rapidly moving toward a world where the fundamental building blocks of computation are temperamental, mysterious, adaptive agents…. This shift in the underlying definition of computing presents a huge opportunity, and plenty of huge risks. Yet I think it is time to accept that this is a very likely future, and evolve our thinking accordingly, rather than just sit here waiting for the meteor to hit.
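
For scale, the “mundane skill” Welsh singles out, adding a node to a binary tree, comes to only a dozen or so lines. A minimal Python sketch of the exercise he expects to become antiquated:

```python
class Node:
    """A node in a binary search tree."""
    def __init__(self, value):
        self.value = value
        self.left = None   # subtree of smaller values
        self.right = None  # subtree of larger values


def insert(root, value):
    """Insert value into the tree rooted at root; return the root."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    elif value > root.value:
        root.right = insert(root.right, value)
    return root  # duplicates are silently ignored


# Usage: build a small tree from a handful of values.
root = None
for v in [5, 3, 8, 1]:
    root = insert(root, v)
```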

“I think the debate right now is primarily around the extent to which these AI models are going to revolutionize the field,” Welsh says in a video interview. “It’s more a question of degree rather than whether it’s going to happen….

“I think we’re going to change from a world in which people are primarily writing programs by hand to a world in which we’re teaching AI models how to do things that we want them to do… It starts to feel more like a field that focuses on AI education and maybe even AI psychiatry. In order to solve these problems, you can’t just assume that people are going to be writing the code by hand.”
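
What “teaching by example” might look like in practice is already visible in today’s few-shot prompting. A minimal, hypothetical sketch (call_model is a stand-in for whatever model endpoint one wires up, not a real library call):

```python
# A few-shot "program": the task is specified by examples rather than code.
PROMPT = """Convert each date to ISO 8601 format.

Input: March 5, 2021 -> Output: 2021-03-05
Input: 12 July 1999 -> Output: 1999-07-12
Input: Jan 2, 2023 -> Output:"""


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real large-language-model API."""
    raise NotImplementedError("wire this up to an actual model to run it")


# The "programming" is choosing good examples; the model generalizes.
# print(call_model(PROMPT))  # expected completion: 2023-01-02
```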

The Shameful Open Secret Behind Southwest’s Failure? Software Shortcomings

Computer programmer Zeynep Tufekci now writes about the impact of technology on society. In an opinion piece for the New York Times, Tufekci takes on “the shameful open secret” that earlier this week led Southwest Airlines to suddenly cancel 5,400 flights in less than 48 hours. “The recent meltdown was avoidable, but it would have cost them.”

Long-time Slashdot reader theodp writes that the piece “takes a crack at explaining ‘technical debt’ to the masses.” Tufekci writes:

Computers become increasingly capable and powerful by the year and new hardware is often the most visible cue for technological progress. However, even with the shiniest hardware, the software that plays a critical role inside many systems is too often antiquated, and in some cases decades old. This failing appears to be a key factor in why Southwest Airlines couldn’t return to business as usual the way other airlines did after last week’s major winter storm. More than 15,000 of its flights were canceled starting on Dec. 22, including more than 2,300 canceled this past Thursday — almost a week after the storm had passed.

It’s been an open secret within Southwest for some time, and a shameful one, that the company desperately needed to modernize its scheduling systems. Software shortcomings had contributed to previous, smaller-scale meltdowns, and Southwest unions had repeatedly warned about it. Without more government regulation and oversight, and greater accountability, we may see more fiascos like this one, which most likely stranded hundreds of thousands of Southwest passengers — perhaps more than a million — over Christmas week.

And not just at a single company: the problem is widespread across many industries.

“The reason we made it through Y2K intact is that we didn’t ignore the problem,” the piece argues. Southwest, by contrast, had already experienced another cancellation crisis in October 2021, when the president of the pilots’ union “pointed out that the antiquated crew-scheduling technology was leading to cascading disruptions.” “In March, in its open letter to the company, the union even placed updating the creaking scheduling technology above its demands for increased pay.”

Speaking about this week’s outage, a Southwest spokesman concedes that “We had available crews and aircraft, but our technology struggled to align our resources due to the magnitude and scale of the disruptions.”

But Tufekci concludes that “Ultimately, the problem is that we haven’t built a regulatory environment where companies have incentives to address technical debt, rather than passing the burden on to customers, employees or the next management…. For airlines, it might mean holding them responsible for the problems their miserly approach causes to the flying public.”

‘Lifetime Value’ Is Silicon Valley’s Next Buzzword

So long, “total addressable market.” Farewell, “flywheel effect.” Silicon Valley has a new buzzword. As the cost of signing up new customers rises, “lifetime value” is set to become must-use jargon for technology executives, investors and analysts in 2023. Reuters reports: Companies like Uber, DoorDash and Spotify want shareholders to know they can squeeze more revenue out of users than it costs to recruit them. As with previously popular jargon, though, the idea can quickly get garbled. The concept of lifetime value is not new, but a common definition remains elusive. The venture capitalist Bill Gurley defines it as “the net present value of the profit stream of a customer.” Hollywood uses it to estimate the cumulative income from streaming movie titles, after deducting the cost of making the film.

It’s catching on in the tech world. Uber boss Dara Khosrowshahi and his team invoked (PDF) the term seven times during the ride-hailing firm’s investor day. At a similar event in June executives from music streaming service Spotify mentioned (PDF) it 14 times, with another 47 references to the abbreviation LTV. Earnings transcripts for 4,800 U.S.-listed companies analyzed by Bedrock AI show executives and analysts mentioned “lifetime value” over 500 times between October and mid-December, up from just 47 times in the three months to March 2019.

The problem is that everyone seems to have a different definition of lifetime value. Food delivery firm DoorDash looks at it as a metric to measure “customer retention, order frequency, and gross profit per order” over a fixed payback period. Uber and its Southeast Asian peer Grab treat it as the ability to bring in one customer and then cross-sell different services at a lower cost. The $49 billion e-commerce firm Shopify defines lifetime value as the total amount of money a customer is expected to spend with the business over the course of an “average business relationship.” But lifetime value isn’t a silver bullet, as Gurley noted a decade ago. As capital becomes more scarce, generating free cash flow remains the most important target. As with previous buzzwords, investors may find that references to lifetime value do more to confound than clarify.
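
Gurley’s definition, “the net present value of the profit stream of a customer,” is at least concrete enough to compute. A minimal sketch with purely illustrative numbers; the profit, retention, and discount figures below are assumptions, not data from any company mentioned above:

```python
def lifetime_value(profit_per_period: float,
                   retention_rate: float,
                   discount_rate: float,
                   periods: int = 120) -> float:
    """Net present value of a customer's expected profit stream.

    In period t the customer is still around with probability
    retention_rate ** t, and profit is discounted at discount_rate.
    """
    return sum(
        profit_per_period * retention_rate ** t / (1.0 + discount_rate) ** t
        for t in range(periods)
    )


# Illustrative only: $10 profit/month, 90% monthly retention,
# 1% monthly discount rate -> an LTV of roughly $92 per customer.
print(round(lifetime_value(10.0, 0.90, 0.01), 2))
```

Every term in that sum is a modeling choice, which is one reason the metric comes out differently at every company that reports it.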

Code-Generating AI Can Introduce Security Vulnerabilities, Study Finds

An anonymous reader quotes a report from TechCrunch: A recent study finds that software engineers who use code-generating AI systems are more likely to cause security vulnerabilities in the apps they develop. The paper, co-authored by a team of researchers affiliated with Stanford, highlights the potential pitfalls of code-generating systems as vendors like GitHub start marketing them in earnest. The Stanford study looked specifically at Codex, the AI code-generating system developed by San Francisco-based research lab OpenAI. (Codex powers Copilot.) The researchers recruited 47 developers — ranging from undergraduate students to industry professionals with decades of programming experience — to use Codex to complete security-related problems across programming languages including Python, JavaScript and C.

Codex was trained on billions of lines of public code to suggest additional lines of code and functions given the context of existing code. The system surfaces a programming approach or solution in response to a description of what a developer wants to accomplish (e.g. “Say hello world”), drawing on both its knowledge base and the current context. According to the researchers, the study participants who had access to Codex were more likely to write incorrect and “insecure” (in the cybersecurity sense) solutions to programming problems compared to a control group. More concerning still, they were also more likely than the people in the control group to rate their insecure answers as secure.

Megha Srivastava, a postgraduate student at Stanford and the second co-author on the study, stressed that the findings aren’t a complete condemnation of Codex and other code-generating systems. The study participants didn’t have security expertise that might’ve enabled them to better spot code vulnerabilities, for one. That aside, Srivastava believes that code-generating systems are reliably helpful for tasks that aren’t high risk, like exploratory research code, and could, with fine-tuning, improve their coding suggestions. “Companies that develop their own [systems], perhaps further trained on their in-house source code, may be better off as the model may be encouraged to generate outputs more in-line with their coding and security practices,” Srivastava said. The co-authors suggest vendors use a mechanism to “refine” users’ prompts to be more secure — “akin to a supervisor looking over and revising rough drafts of code,” reports TechCrunch. “They also suggest that developers of cryptography libraries ensure their default settings are secure, as code-generating systems tend to stick to default values that aren’t always free of exploits.”
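
The point about risky defaults is easiest to see in cryptographic code. The following illustrates the general pattern rather than reproducing an example from the study: with the PyCryptodome library, ECB mode (a legacy mode a generator can easily pick up from old example code) encrypts identical plaintext blocks to identical ciphertext blocks, while an authenticated mode such as GCM does not:

```python
from Crypto.Cipher import AES          # PyCryptodome
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
message = b"sixteen byte msg" * 2      # two identical 16-byte blocks

# The risky pattern: ECB mode leaks structure, because identical
# plaintext blocks always produce identical ciphertext blocks.
ecb = AES.new(key, AES.MODE_ECB)
ct_bad = ecb.encrypt(message)
assert ct_bad[:16] == ct_bad[16:32]    # the leak, demonstrated

# A safer pattern: an authenticated mode (GCM) with a fresh random nonce.
gcm = AES.new(key, AES.MODE_GCM)
ct_good, tag = gcm.encrypt_and_digest(message)
assert ct_good[:16] != ct_good[16:32]  # no repeating-block structure
```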

Developer Uses iOS 16 Exploit To Change System Font Without Jailbreak

A developer managed to use an exploit found in iOS 16 to change the default font of the system without a jailbreak. 9to5Mac reports: Zhuowei Zhang shared his project on Twitter, which he calls a “proof-of-concept app.” According to Zhang, the app he developed uses the CVE-2022-46689 exploit to overwrite the default iOS font, so that users can customize the system’s appearance with a font other than the default (which is San Francisco). The CVE-2022-46689 exploit affects devices running iOS 16.1.2 or earlier versions of the operating system, and it essentially lets apps execute arbitrary code with kernel privileges. The exploit was fixed in iOS 16.2, which also patched a number of other security vulnerabilities found in the previous version of iOS.

Since iOS has its own font format, the developer performed the experiment using only a few fonts, including DejaVu Sans Condensed, Serif, Mono, and Choco Cooky. And in case you’re wondering, Choco Cooky is the weird font that used to come pre-installed by default on Samsung smartphones. Now you can finally have it on your iPhone. Zhang explains that the process should be safe for everyone, since all changes are reverted after rebooting the device. Still, the developer recommends that users trying out the app back up their devices before replacing the default system font. He also notes that the change affects only some of the text on iOS, as other parts of the system use different fonts. More details about the project, including its source code, are available on GitHub.
