Remembering Unix Desktops – and What We Can Learn From Them

“As important as its historically underhanded business dealings were for its success, Microsoft didn’t have to cheat to win,” argues a new article in the Register.

“The Unix companies were doing a great job of killing themselves off.”

You see, while there were many attempts to create software development standards for Unix, they were either too general to do much good, such as the Portable Operating System Interface (POSIX), or they became mired in the business consortium fights between the Open Software Foundation and Unix International, which became known as the Unix wars.

While the Unix companies were busy ripping each other to shreds, Microsoft was smiling all the way to the bank. The core problem was that the Unix companies couldn’t settle on software standards. Independent Software Vendors (ISVs) had to write applications for each Unix platform, and each of those platforms had only a minute share of the desktop market. It simply made no business sense for programmers to write one version of an application for SCO OpenDesktop (also known as OpenDeathtrap), another for NeXTStep, and still another for SunOS. Does that sound familiar? That kind of thing is still a problem for the Linux desktop, and it’s why I’m a big fan of Linux containerized desktop applications, such as Red Hat’s Flatpak and Canonical’s Snap.

By the time the two sides finally made peace by joining forces in The Open Group in 1996, it was too late. Unix was crowded out on the conventional desktop, and the workstation became pretty much a Sun Microsystems-only play.
Linux’s GPL license created an “enforced” consortium that allowed it to take over, according to the article, and with Linus Torvalds as Linux’s single leader, “it avoided the old Unix trap of in-fighting…

“I’ve been to many Linux Plumbers meetings. There, I’ve seen him and the top Linux kernel developers work with each other without any drama. Today’s Linux is a group effort… The Linux distributors and developers have learned their Unix history lessons. They’ve realized that it takes more than open source; it takes open standards and consensus to make a successful desktop operating system.”
And the article also points out that one of those early Unix desktops “is still alive, well, and running in about one in four desktops.”

That operating system, of course, is macOS, the direct descendant of NeXT’s NeXTSTEP. You could argue that macOS, built on the open source Darwin foundation, which combines the multi-threaded, multi-processing Mach microkernel with BSD Unix, is the most successful of all Unix operating systems.

Read more of this story at Slashdot.

Apple’s Large Language Model Shows Up in New iOS Code

An anonymous reader shares a report: Apple is widely expected to unveil major new artificial intelligence features with iOS 18 in June. Code found by 9to5Mac in the first beta of iOS 17.4 shows that Apple is continuing to work on a new version of Siri powered by large language model technology, with a little help from other sources. In fact, Apple appears to be using OpenAI’s ChatGPT API for internal testing to help the development of its own AI models. According to this code, iOS 17.4 includes a new SiriSummarization private framework that makes calls to OpenAI’s ChatGPT API. This appears to be something Apple is using for internal testing of its new AI features. There are multiple examples of system prompts for the SiriSummarization framework in iOS 17.4 as well. This includes things like “please summarize,” “please answer this questions,” and “please summarize the given text.”
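9to5Mac does not publish the framework’s code, but a request of the kind it describes, a “please summarize the given text” system prompt sent to OpenAI’s chat completions API, would look roughly like the minimal Python sketch below. The model name and helper function are illustrative assumptions, not anything recovered from iOS 17.4.

```python
# Minimal sketch of a summarization call to OpenAI's chat completions API,
# the kind of request the SiriSummarization framework reportedly makes during
# internal testing. The prompt wording and model choice here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the report does not name one
        messages=[
            {"role": "system", "content": "Please summarize the given text."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```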

Apple is unlikely to use OpenAI models to power any of its artificial intelligence features in iOS 18. Instead, what it’s doing here is testing its own AI models against ChatGPT. For example, the SiriSummarization framework can do summarization using on-device models. Apple appears to be using its own AI models to power this framework, then internally comparing its results against the results of ChatGPT. In total, iOS 17.4 code suggests Apple is testing four different AI models. This includes Apple’s internal model called “Ajax,” which Bloomberg has previously reported. iOS 17.4 shows that there are two versions of AjaxGPT, including one that is processed on-device and one that is not.
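The report does not describe how Apple scores these internal comparisons. The hedged sketch below only illustrates the workflow it describes, running the same text through an on-device model and through ChatGPT and putting the two outputs side by side; ajax_summarize is a hypothetical stand-in for Apple’s internal “Ajax” model, not a real API.

```python
# Hypothetical A/B harness mirroring the described internal testing: summarize
# the same text with a local model and with ChatGPT, then compare the outputs.
# summarize() is the ChatGPT helper from the previous sketch.

def ajax_summarize(text: str) -> str:
    # Placeholder for an on-device summarization model (e.g. "Ajax").
    raise NotImplementedError("swap in a call to a local model here")

def compare_summaries(text: str) -> dict:
    on_device = ajax_summarize(text)
    reference = summarize(text)  # ChatGPT summary from the sketch above
    return {
        "on_device": on_device,
        "chatgpt": reference,
        "length_ratio": len(on_device) / max(len(reference), 1),  # crude metric
    }
```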

Read more of this story at Slashdot.

The Great Freight-Train Heists of the 21st Century

Cargo theft from freight trains in the Los Angeles area has surged, with Union Pacific detectives estimating that more than 90 containers were being opened daily and that theft on the railroad’s trains in the area was up some 160 percent from the previous year. Nationally, cargo theft neared $1 billion in losses last year. The companies involved decline to comment, but California’s governor has publicly questioned the widespread railroad theft. Most of those arrested were not part of organized rings; many were homeless people living nearby who opportunistically took fallen boxes off the tracks. The thefts stem largely from the e-commerce boom, which reshaped freight shipping to meet consumer demand and opened new vulnerabilities. Railroad police forces and online retailers aim to combat the problem, but they concede it is difficult to track stolen goods resold anonymously online; some products stolen from containers even get resold back on Amazon. The New York Times Magazine: Sometimes products stolen out of Amazon containers are resold by third-party sellers back on Amazon in a kind of strange ouroboros, in which the snakehead of capitalism hungrily swallows its piracy tail. Last June, California’s attorney general created what was touted as a first-of-its-kind agreement among online retailers that committed them to doing a better job tracking, reporting and preventing stolen items from being resold on their platforms. While declining to comment on specific cases, a spokesperson for Amazon told me that the company is working to improve the process of vetting sellers: The number of “bad actor attempts” to create new selling accounts on Amazon decreased to 800,000 in 2022 from six million in 2020.

Read more of this story at Slashdot.

FTC Launches Inquiry Into AI Deals by Tech Giants

The Federal Trade Commission launched an inquiry (non-paywalled link) on Thursday into the multibillion-dollar investments by Microsoft, Amazon and Google in the artificial intelligence start-ups OpenAI and Anthropic, broadening the regulator’s efforts to corral the power the tech giants can have over A.I. The New York Times: These deals have allowed the big companies to form deep ties with their smaller rivals while dodging most government scrutiny. Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT, while Amazon and Google have each committed billions of dollars to Anthropic, another leading A.I. start-up.

Regulators have typically focused on bringing antitrust lawsuits against deals where the tech giants are buying rivals outright or using acquisitions to expand into new businesses, leading to increased prices and other harm, and have not regularly challenged stakes that the companies buy in start-ups. The F.T.C.’s inquiry will examine how these investment deals alter the competitive landscape and could inform any investigations by federal antitrust regulators into whether the deals have broken laws.

The F.T.C. said it would ask Microsoft, OpenAI, Amazon, Google and Anthropic to describe their influence over their partners and how they worked together to make decisions. It also said it would demand that they provide any internal documents that could shed light on the deals and their potential impact on competition.

Read more of this story at Slashdot.

Cruise Says Hostility Toward Regulators Led To Grounding of Its Autonomous Cars

Cruise, the driverless car subsidiary of General Motors, said in a report on Thursday that an adversarial approach taken (non-paywalled link) by its top executives toward regulators had led to a cascade of events that ended with a nationwide suspension of Cruise’s fleet. From a report: The roughly 100-page report was compiled by a law firm that Cruise hired to investigate whether its executives had misled California regulators about an October crash in San Francisco in which a Cruise vehicle dragged a woman 20 feet. The investigation found that while the executives had not intentionally misled state officials, they had failed to explain key details about the incident. In meetings with regulators, the executives let a video of the crash “speak for itself” rather than fully explain how one of its vehicles severely injured the pedestrian. The executives later fixated on protecting Cruise’s reputation rather than giving a full account of the accident to the public and media, according to the report, which was written by the Quinn Emanuel Urquhart & Sullivan law firm.

The company said that the Justice Department and the Securities and Exchange Commission were investigating the incident, as well as state agencies and the National Highway Traffic Safety Administration. The report is central to Cruise’s efforts to regain the public’s trust and eventually restart its business. Cruise has been largely shut down since October, when the California Department of Motor Vehicles suspended its license to operate because its vehicles were unsafe. It responded by pulling its driverless cars off the road across the country, laying off a quarter of its staff and replacing Kyle Vogt, its co-founder and chief executive, who resigned in November, with new leaders.

Read more of this story at Slashdot.

OpenAI Quietly Scrapped a Promise To Disclose Key Documents To the Public

From its founding, OpenAI said its governing documents were available to the public. When WIRED requested copies after the company’s boardroom drama, it declined to provide them. Wired: Wealthy tech entrepreneurs including Elon Musk launched OpenAI in 2015 as a nonprofit research lab that they said would involve society and the public in the development of powerful AI, unlike Google and other giant tech companies working behind closed doors. In line with that spirit, OpenAI’s reports to US tax authorities have from its founding said that any member of the public can review copies of its governing documents, financial statements, and conflict of interest rules. But when WIRED requested those records last month, OpenAI said its policy had changed, and the company provided only a narrow financial statement that omitted the majority of its operations.

“We provide financial statements when requested,” company spokesperson Niko Felix says. “OpenAI aligns our practices with industry standards, and since 2022 that includes not publicly distributing additional internal documents.” OpenAI’s abandonment of the long-standing transparency pledge obscures information that could shed light on the recent near-implosion of a company with crucial influence over the future of AI and could help outsiders understand its vulnerabilities. In November, OpenAI’s board fired CEO Sam Altman, implying in a statement that he was untrustworthy and had endangered its mission to ensure AI “benefits all humanity.” An employee and investor revolt soon forced the board to reinstate Altman and eject most of its own members, with an overhauled slate of directors vowing to review the crisis and enact structural changes to win back the trust of stakeholders.

Read more of this story at Slashdot.