Museum Restores 21 Rare Videos from Legendary 1976 Computing Conference

At Silicon Valley’s Computer History Museum, the senior curator has announced the results of a multi-year recovery and restoration effort that makes available 21 never-before-seen video recordings of a legendary 1976 conference:

For five summer days in 1976, the first generation of computer rock stars had its own Woodstock. Coming from around the world, dozens of computing’s top engineers, scientists, and software pioneers got together to reflect upon the first 25 years of their discipline in the warm, sunny (and perhaps a bit unsettling) climes of the Los Alamos National Laboratory, birthplace of the atomic bomb.

Among the speakers:

– A young Donald Knuth on the early history of programming languages

– FORTRAN designer John Backus on programming in America in the 1950s — some personal perspectives

– Harvard’s Richard Milton Bloch (who worked with Grace Hopper in 1944)

– Mathematician/nuclear physicist Stanisław M. Ulam on the interaction of mathematics and computing

– Edsger W. Dijkstra on “a programmer’s early memories”

The Computer History Museum teases some highlights:

Typical of computers of this generation, the 1946 ENIAC, the earliest American large-scale electronic computer, had to be left powered up 24 hours a day to keep its 18,000 vacuum tubes healthy. Turning them on and off, like a light bulb, shortened their life dramatically. ENIAC co-inventor John Mauchly discusses this serious issue….
The Los Alamos peak moment was the brilliant lecture on the British WW II Colossus computing engines by computer scientist and historian of computing Brian Randell. Colossus machines were special-purpose computers used to decipher messages of the German High Command in WW II. Based in southern England at Bletchley Park, these giant codebreaking machines regularly provided life-saving intelligence to the allies. Their existence was a closely-held secret during the war and for decades after. Randell’s lecture was — excuse me — a bombshell, one which prompted an immediate re-assessment of the entire history of computing. Observes conference attendee (and inventor of ASCII) IBM’s Bob Bemer, “On stage came Prof. Brian Randell, asking if anyone had ever wondered what Alan Turing had done during World War II? From there he went on to tell the story of Colossus — that day at Los Alamos was close to the first time the British Official Secrets Act had permitted any disclosures. I have heard the expression many times about jaws dropping, but I had really never seen it happen before.”

Publishing these original primary sources for the first time is part of CHM’s mission to not only preserve computing history but to make it come alive. We hope you will enjoy seeing and hearing from these early pioneers of computing.

Read more of this story at Slashdot.

Should IT Professionals Be Liable for Ransomware Attacks?

Denmark-based Poul-Henning Kamp describes himself as the “author of a lot of FreeBSD, most of Varnish and tons of other Open Source Software.” And he shares this message in June’s Communications of the ACM.

“The software industry is still the problem.”
If any science fiction author, famous or obscure, had submitted a story where the plot was “modern IT is a bunch of crap that organized crime exploits for extortion,” it would have gotten nowhere, because (A) that is just not credible, and (B) yawn!

And yet, here we are…. As I write this, 200-plus corporations, including many retail chains, have inoperative IT because extortionists found a hole in some niche, third-party software product most of us have never heard of.

But he’s also proposing a solution.
In Denmark, 129 jobs are regulated by law. There are good and obvious reasons why it is illegal for any random Ken, Brian, or Dennis to install toilets or natural-gas furnaces, perform brain surgery, or certify a building is strong enough to be left outside during winter. It may be less obvious why the state cares who runs pet shops, inseminates cattle, or performs zoological taxidermy, but if you read the applicable laws, you will learn that animal welfare and protection of endangered species have many and obscure corner cases.

Notably absent, as in totally absent, on that list are any and all jobs related to IT: IT architecture, computers, computer networks, computer security, or protection of privacy in computer systems. People who have been legally barred and delicensed from every other possible trade — be it for incompetence, fraud, or both — are entirely free to enter the IT profession and become responsible for the IT architecture or cybersecurity of the IT system that controls nearly half the hydrocarbons to the Eastern Seaboard of the U.S….

With respect to gas, water, electricity, sewers, or building stability, the regulations do not care if a company is hundreds of years old or just started this morning; the rules are always the same: Stuff should just work, and only people who are licensed — because they know how to — are allowed to make it work, and they can be sued if they fail to do so.

The time is way overdue for IT engineers to be subject to professional liability, like almost every other engineering profession. Before you tell me that is impossible, please study how the very same thing happened with electricity, planes, cranes, trains, ships, automobiles, lifts, food processing, buildings, and, for that matter, driving a car.

As with software product liability, the astute reader is apt to exclaim, “This will be the end of IT as we know it!” Again, my considered response is, “Yes, please, that is precisely my point!”

Read more of this story at Slashdot.

Is GitHub Suspending the Accounts of Russian Developers at Sanctioned Companies?

“Russian software developers are reporting that their GitHub accounts are being suspended without warning if they work for or previously worked for companies under U.S. sanctions,” writes Bleeping Computer:

According to Russian media outlets, the ban wave began on April 13 and didn’t discriminate between companies and individuals. For example, the GitHub accounts of Sberbank Technology, Sberbank AI Lab, and the Alfa Bank Laboratory had their code repositories initially disabled and are now removed from the platform…. Personal accounts suspended on GitHub have their content wiped while all repositories become immediately out of reach, and the same applies to issues and pull requests.

Habr.com [a Russian collaborative blog about IT] reports that some Russian developers contacted GitHub about the suspension and received an email titled ‘GitHub and Trade Controls’ explaining that their account was disabled due to US sanctions. The email contains a link to a GitHub page describing the company’s policies on sanctions and trade controls, which explains how a user can appeal their suspension. The appeal form requires the individual to certify that they do not use their GitHub account on behalf of a sanctioned entity. One developer posted to Twitter saying he was able to get his suspension lifted by filling out the form, and that it had been triggered by his previous employer being sanctioned.
A GitHub blog post in March had promised to ensure the availability of open source services “to all, including developers in Russia.” So Bleeping Computer contacted a GitHub spokesperson, who explained this weekend that while GitHub may be required to restrict some users to comply with U.S. laws, “We examine government sanctions thoroughly to be certain that users and customers are not impacted beyond what is required by law.”
According to this policy, the suspended private accounts are either affiliated with, collaborating with, or working for sanctioned entities. However, even those who only previously worked for a sanctioned company appear to have been suspended by mistake.

This means that Russian users, in general, can suddenly find their projects wiped and accounts suspended, even if those projects have nothing to do with the sanctioned entities.

Read more of this story at Slashdot.

‘Biggest Change Ever’ to Go Brings Generics, Native Fuzzing, and a Performance Boost

“Supporting generics has been Go’s most often requested feature, and we’re proud to deliver the generic support that the majority of users need today,” the Go blog announced this week. *

It’s part of what Go’s development team is calling the “biggest change ever to the language”.

SiliconANGLE writes that “Right out of the gate, Go 1.18 is getting a CPU speed performance boost of up to 20% for Apple M1, ARM64 and PowerPC64 chips. This is all from an expansion of Go 1.17’s calling conventions for the application binary interface on these processor architectures.”

And Go 1.18 also introduces native support for fuzz testing — making Go the first major programming language to offer it, writes ZDNet:

As Google explains, fuzz testing or ‘fuzzing’ is a means of testing the vulnerability of a piece of software by throwing arbitrary or invalid data at it to expose bugs and unknown errors. This adds an additional layer of security to Go’s code that will keep it protected as its functionality evolves — crucial as attacks on software continue to escalate both in frequency and complexity. “At Google we are committed to securing the online infrastructure and applications the world depends upon,” said Eric Brewer, VP of infrastructure at Google….
While other languages support fuzzing, Go is the first major programming language to incorporate it into its core toolchain, meaning — unlike other languages — third-party support integrations aren’t required.
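
To make that concrete, here’s a minimal sketch of what a native Go fuzz test looks like, modeled on the official Go fuzzing tutorial. The Reverse function and the file layout (everything in one reverse_test.go) are illustrative assumptions, not code shipped with the release:

```go
package reverse

import (
	"testing"
	"unicode/utf8"
)

// Reverse reverses a string rune by rune (the function under test).
func Reverse(s string) string {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes)
}

// FuzzReverse checks properties that should hold for *any* input,
// rather than asserting specific outputs for specific inputs.
func FuzzReverse(f *testing.F) {
	f.Add("hello, world") // seed corpus entry; the fuzzer mutates it
	f.Fuzz(func(t *testing.T, s string) {
		if !utf8.ValidString(s) {
			t.Skip() // rune-based reversal only round-trips valid UTF-8
		}
		rev := Reverse(s)
		if !utf8.ValidString(rev) {
			t.Errorf("Reverse(%q) produced invalid UTF-8", s)
		}
		if Reverse(rev) != s {
			t.Errorf("Reverse(Reverse(%q)) = %q, want the original", s, Reverse(rev))
		}
	})
}
```

Running go test -fuzz=FuzzReverse mutates the seed inputs indefinitely; any failing input is saved under testdata/fuzz, where it replays as an ordinary regression test from then on.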

Google is emphasizing Go’s security features — and its widespread adoption. ZDNet writes:

Google created Go in 2007; the language was designed specifically to help software engineers build secure, open-source enterprise applications for modern, multi-core computing systems. More than three-quarters of Cloud Native Computing Foundation projects, including Kubernetes and Istio, are written in Go, says Google. [Also Docker and etcd.] According to data from Stack Overflow, some 10% of developers are writing in Go worldwide, and there are signs that more recruiters are seeking out Go coders in their search for tech talent…. “Although we have a dedicated Go team at Google, we welcome a significant amount of contributions from our community. It’s a shared effort, and with their updates we’re helping our community achieve Go’s long-term vision.”
Or, as the Go blog says:

We want to thank every Go user who filed a bug, sent in a change, wrote a tutorial, or helped in any way to make Go 1.18 a reality. We couldn’t do it without you. Thank you.

Enjoy Go 1.18!

* Supporting generics “includes major — but fully backward-compatible — changes to the language,” the release notes explain, though they also add a few cautionary notes:

These new language changes required a large amount of new code that has not had significant testing in production settings. That will only happen as more people write and use generic code. We believe that this feature is well implemented and high quality. However, unlike most aspects of Go, we can’t back up that belief with real world experience. Therefore, while we encourage the use of generics where it makes sense, please use appropriate caution when deploying generic code in production.

While we believe that the new language features are well designed and clearly specified, it is possible that we have made mistakes…. it is possible that there will be code using generics that will work with the 1.18 release but break in later releases. We do not plan or expect to make any such change. However, breaking 1.18 programs in future releases may become necessary for reasons that we cannot today foresee. We will minimize any such breakage as much as possible, but we can’t guarantee that the breakage will be zero.
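
For readers who haven’t seen the new syntax, here’s a minimal sketch of a generic function in Go 1.18. The Number constraint and Sum function are illustrative examples of ours, not code from the release notes:

```go
package main

import "fmt"

// Number is a type constraint: it is satisfied by any type whose
// underlying type is int, int64, or float64 (that is what the ~ means).
type Number interface {
	~int | ~int64 | ~float64
}

// Sum adds up a slice of any Number type: one definition, many element types.
func Sum[T Number](values []T) T {
	var total T
	for _, v := range values {
		total += v
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))       // 6
	fmt.Println(Sum([]float64{1.5, 2.25})) // 3.75
}
```

Because of the tildes, named types such as type Celsius float64 also satisfy the constraint, and the compiler infers T from the argument, so callers can usually omit the type argument entirely.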

Read more of this story at Slashdot.

Researchers Release ‘PolyCoder’, the First Open-Source Code-Generating AI Model

“Code generation AI — AI systems that can write in different programming languages given a prompt — promise to cut development costs while allowing coders to focus on creative, less repetitive tasks,” writes VentureBeat.

“But while research labs like OpenAI and Alphabet-backed DeepMind have developed powerful code-generating AI, many of the most capable systems aren’t available in open source.”

For example, the training data for OpenAI’s Codex, which powers GitHub’s Copilot feature, hasn’t been made publicly available, preventing researchers from fine-tuning the AI model or studying aspects of it such as interpretability.

To remedy this, researchers at Carnegie Mellon University — Frank Xu, Uri Alon, Graham Neubig, and Vincent Hellendoorn — developed PolyCoder, a model based on OpenAI’s GPT-2 language model that was trained on a database of 249 gigabytes of code across 12 programming languages. While PolyCoder doesn’t match the performance of top code generators in every task, the researchers claim that PolyCoder is able to write in C with greater accuracy than all known models, including Codex….

“Large tech companies aren’t publicly releasing their models, which is really holding back scientific research and democratization of such large language models of code,” the researchers said. “To some extent, we hope that our open-sourcing efforts will convince others to do the same. But the bigger picture is that the community should be able to train these models themselves. Our model pushed the limit of what you can train on a single server — anything bigger requires a cluster of servers, which dramatically increases the cost.”

Read more of this story at Slashdot.

Why Swift Creator Chris Lattner Stepped Down From Its Core Team This Week

The creator of Apple’s Swift programming language stayed involved in the Swift core team and Evolution community… until this week. Though he’d left Apple more than five years ago, “Swift is important to me, so I’ve been happy to spend a significant amount of time to help improve and steer it,” Lattner wrote in an explanatory comment on the Swift community forum. “This included the ~weekly core team meetings (initially in person, then over WebEx)…”

The tech news site DevClass notes Lattner is also “the mind behind compiler infrastructure project LLVM,” but reports that “Apparently, Lattner hasn’t been part of the [Swift] core team since autumn 2021, when he tried discussing what he perceived as a toxic meeting environment with project leadership after an especially noteworthy call made him take a break in summer.”

“[…] after avoiding dealing with it, they made excuses, and made it clear they weren’t planning to do anything about it. As such, I decided not to return,” Lattner wrote in his explanation post. Back then, he planned to keep participating via the Swift Evolution community “but after several discussions generating more heat than light, when my formal proposal review comments and concerns were ignored by the unilateral accepts, and the general challenges with transparency working with core team, I decided that my effort was triggering the same friction with the same people, and thus I was just wasting my time.”

Lattner had been the steering force behind Swift since the language’s inception in 2010. However, after he left Apple in 2017 and handed over his project lead role, design premises like “single things that compose” seem to have fallen by the wayside, which made the decision to move on completely easier for language creator Lattner.

The article points out Lattner’s latest endeavour is AI infrastructure company Modular.AI.

And Lattner wrote in his comment that Swift’s leadership “reassures me they ‘want to make sure things are better for others in the future based on what we talked about’ though….”
Swift has a ton of well meaning and super talented people involved in and driving it. They are trying to do the best they can with a complicated situation and many pressures (including lofty goals, fixed schedules, deep bug queues to clear, internal folks that want to review/design things before the public has access to them, and pressures outside their team) that induce odd interactions with the community. By the time things get out to us, the plans are already very far along and sometimes the individuals are attached to the designs they’ve put a lot of energy into. This leads to a challenging dynamic for everyone involved.

I think that Swift is a phenomenal language and has a long and successful future ahead, but it certainly isn’t a community designed language, and this isn’t ambiguous. The new ideas on how to improve things sound promising — I hope they address the fundamental incentive system challenges that the engineers/leaders face that cause the symptoms we see. I think that a healthy and inclusive community will continue to benefit the design and evolution of Swift.

DevClass also reported on the aftermath:
Probably as a consequence of the move, the Swift core team is currently looking to restructure project leadership. According to Swift project lead Ted Kremenek… “The intent is to free the core team to invest more in overall project stewardship and create a larger language workgroup that can incorporate more community members in language decisions.”

Kremenek also used the announcement to thank Lattner for his leadership throughout the formative years of the project, writing “it has been one of the greatest privileges of my life to work with Chris on Swift.”

In 2017 Chris Lattner answered questions from Slashdot’s readers.

Read more of this story at Slashdot.

Programming in Rust is Fun – But Challenging, Finds Annual Community Survey

Respondents to the annual survey of the Rust community reported an uptick in weekly usage and challenges, writes InfoWorld:

Among those surveyed who are using Rust, 81% were using the language on at least a weekly basis, compared to 72% in last year’s survey. Of all Rust users, 75% said they are able to write production-ready code but 27% said it was at times a struggle to write useful, production-ready code…. While the survey pointed toward a growing, healthy community of “Rustaceans,” it also found challenges. In particular, Rust users would like to see improvements in compile times, disk usage, debugging, and GUI development…

– For those who adopted Rust at work, 83% found it “challenging.” But it was unclear how much of this was a Rust-specific issue and how much was the general challenge of adopting any new language. During adoption, only 13% of respondents believed the language was slowing their team down while 82% believed Rust helped their teams achieve their goals.
– Of the respondents using Rust, 59% use it at least occasionally at work and 23% use it for the majority of their coding. Last year, only 42% used Rust at work.
From the survey’s results:
After adoption, the costs seem to be justified: only 1% of respondents did not find the challenge worth it while 79% said it definitely was. When asked if their teams were likely to use Rust again in the future, 90% agreed. Finally, of respondents using Rust at work, 89% of respondents said their teams found it fun and enjoyable to program.
As for why respondents are using Rust at work, the top answer was that it allowed users “to build relatively correct and bug free software” with 96% of respondents agreeing with that statement. After correctness, performance (92%) was the next most popular choice. 89% of respondents agreed that they picked Rust at work because of Rust’s much-discussed security properties.
Overall, Rust seems to be a language ready for the challenges of production, with only 3% of respondents saying that Rust was a “risky” choice for production use.

Thanks to Slashdot reader joshuark for submitting the story…

Read more of this story at Slashdot.

Library Intentionally Corrupted by Developer Relaunches as a Community-Driven Project

Last weekend a developer intentionally corrupted two of his libraries which collectively had more than 20 million weekly downloads and thousands of dependent projects.

Eight days later, one of those libraries has become a community-controlled project.

Some highlights from the announcement at fakerjs.dev:

We’re a group of engineers who were using Faker in prod when the main package was deleted. We have eight maintainers currently….

What has the team done so far?

1. Created a GitHub org [repository] for the new Faker package under @faker-js/faker.
2. Put together a team of eight maintainers.
3. Released all previous versions of Faker at @faker-js/faker on npm.
4. Released the Version 6 Alpha.
5. Almost completed migrating to TypeScript so that DefinitelyTyped no longer needs to maintain its external @types/faker package.
6. Created a public Twitter account for communicating with the community.
7. Released the first official Faker documentation website….

Faker has never had an official docs website and the awesome Jeff Beltran has been maintaining a project called “Un-Official faker.js Documentation” for the last 3 years.

He gave us permission to re-use his work to create fakerjs.dev

8. Cleaned up tooling like Prettier, CI, Netlify Deploy Previews, and GitHub Actions.

9. Done a TON of issue triage and many, many PR reviews.
10. We’ve gotten in contact with the Open Collective and discussed a transition plan for the project.

We fully intend to extend Faker, continuously develop it, and make it even better.

As such, we will work on a roadmap after we release 6.x and merge all of the TypeScript Pull Requests in the next week….

We’re now turning Faker into a community-controlled project currently maintained by eight engineers from various backgrounds and companies….

We’re excited to give new life to this idea and project.

This project can have a fresh start and it will become even cooler.

We felt we needed to do a public announcement because of all of the attention the project received in the media and from the community.

We believe that we have acted in the way that is best for the community.

According to the announcement, they’ve now also forked the funding so the project’s original sponsors can continue to support the community-driven development in the future, while the original developers Marak and Brian “were able to retain the $11,652.69 USD previously donated to the project.”

Friday the official Twitter account for the new community project announced “It’s been a week. We’ve merged all of the active forks. Currently at 1532 stars. Looks like everything is settling.” [It’s now up to over 1,800 stars.]

One of the new maintainers has posted on Twitter, “I’m just grateful to the faker community that willed itself into existence and stepped up.”

Read more of this story at Slashdot.

GitHub Restores Account of Developer Who Intentionally Corrupted His Libraries

What happened after a developer intentionally corrupted two of their libraries which collectively had more than 20 million weekly downloads and thousands of dependent projects?

Mike Melanson’s “This Week in Programming” column reports:

In response to the corrupted libraries, Microsoft quickly suspended his GitHub access and reverted the projects on npm…. While this might seem like an open-and-shut case to some — the developer committed malicious code, and GitHub and npm did what they had to do to protect their users — a debate broke out around a developer’s rights to do what they wish with their code, no matter how many projects and dependencies it may have.

“GitHub suspending someone’s account for modifying their own code in a project they own however they want spooks me a lot more than NPM reverting a package,” [tweeted one company’s Director of Engineering & Technology]. “I kind of love what Marak did to make a point and protest to be honest.”

An article on iProgrammer further outlines the dilemma present in what might otherwise seem like a clear-cut case…. “Yes, it is open source in that you can fork it and can contribute to it but does this mean that GitHub is justified in denying you the right to change or even destroy your own code?”

As of last night, however, it would appear that the entire affair is merely one for intellectual debate, as GitHub has indeed lived up to what some might view as its end of the bargain: the developer’s account is active, he has been allowed to remove his faker.js library on GitHub (depended upon as it might be), and has since offered an update that he does “not have Donkey Brains”.

Read more of this story at Slashdot.