Developer Successfully Boots Up Linux on Google Drive

It’s FOSS writes:
When it comes to Linux, we get to see some really cool, and sometimes quirky projects (read Hannah Montana Linux) that try to show off what’s possible, and that’s not a bad thing. One such quirky undertaking has recently surfaced, which sees a sophomore trying to one-up their friend, who had booted Linux off NFS. With their work, they have been able to run Arch Linux on Google Drive.

Their ultimate idea included FUSE (which allows running file-system code in userspace). The developer’s blog post explains that when Linux boots, “the kernel unpacks a temporary filesystem into RAM which has the tools to mount the real filesystem… it’s very helpful! We can mount a FUSE filesystem in that step and boot normally…. ”
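For readers unfamiliar with FUSE, here is a minimal toy filesystem in Python using the third-party fusepy binding, purely to illustrate what “file-system code in userspace” means. The developer’s actual driver talks to Google Drive, and the post doesn’t say which language or binding it uses, so treat this as an illustrative sketch rather than their implementation:

    import errno
    import stat
    import time

    from fuse import FUSE, FuseOSError, Operations  # third-party: pip install fusepy

    class ToyFS(Operations):
        """A read-only filesystem exposing a single file, entirely in userspace."""
        DATA = b"hello from userspace\n"

        def getattr(self, path, fh=None):
            now = time.time()
            if path == "/":
                return dict(st_mode=(stat.S_IFDIR | 0o755), st_nlink=2,
                            st_ctime=now, st_mtime=now, st_atime=now)
            if path == "/hello.txt":
                return dict(st_mode=(stat.S_IFREG | 0o444), st_nlink=1,
                            st_size=len(self.DATA),
                            st_ctime=now, st_mtime=now, st_atime=now)
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", "..", "hello.txt"]

        def read(self, path, size, offset, fh):
            return self.DATA[offset:offset + size]

    if __name__ == "__main__":
        # Mount point is a placeholder; unmount afterwards with `fusermount -u /mnt/toy`.
        FUSE(ToyFS(), "/mnt/toy", foreground=True, ro=True)

The real project swaps the in-memory bytes for Google Drive API calls; the kernel neither knows nor cares where read() gets its data.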

“Thankfully, Dracut makes it easy enough to build a custom initramfs… I decide to build this on top of Arch Linux because it’s relatively lightweight and I’m familiar with how it works.”
After testing with an Amazon S3 bucket, they built an EFI image — then spent days trying to enable networking… And the adventure continues. (“Would it be possible to manually switch the root without a specialized system call? What if I just chroot?”) After they’d made a few more tweaks, “I sit there, in front of my computer, staring. It can’t have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought to have stopped me, right? Nobody stopped me, so I kept going…”
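The “just chroot” question glosses over what switch_root normally does (moving /dev, /proc and /sys into the new root and freeing the initramfs from RAM), but the bare idea fits in a few lines of Python. The /newroot path and the init binary below are assumptions for illustration, not details from the post:

    import os

    NEW_ROOT = "/newroot"  # assumed mount point of the FUSE-backed root filesystem

    # Make the new filesystem this process's root, then hand control to init.
    # Real initramfs tooling uses switch_root/pivot_root rather than a plain chroot.
    os.chroot(NEW_ROOT)
    os.chdir("/")
    os.execv("/sbin/init", ["/sbin/init"])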

I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server… This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.
Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.

If there is anything I know about technology, it’s that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.

Unfortunately, I don’t know what to do next with this. Maybe I should install Nix?

Read more of this story at Slashdot.

Amid Whistleblower Complaints, Boeing Buys Spirit, Ending Outsourcing of Key Work on Planes

Monday Boeing announced plans to acquire its key supplier, Spirit AeroSystems, for $4.7 billion, according to the Associated Press — “a move that it says will improve plane quality and safety amid increasing scrutiny by Congress, airlines and the Department of Justice. Boeing previously owned Spirit, and the purchase would reverse a longtime Boeing strategy of outsourcing key work on its passenger planes.”

But meanwhile, an anonymous reader shared this report from Newsweek:

More than a hundred Boeing whistleblowers have contacted the U.S. aviation watchdog since the start of the year, Newsweek can reveal. Official figures show that the Federal Aviation Administration’s (FAA) whistleblowing hotline has seen a huge surge of calls from workers concerned about safety problems. Since January the watchdog has received a total of 126 reports via various channels; in 2023, there were just 11….

After a visit from FAA Administrator Mike Whitaker to a Boeing factory earlier in the year, Boeing CEO Dave Calhoun agreed to share details of the hotline with all Boeing employees. The FAA told Newsweek that the number of Boeing employees coming forward was a “sign of a healthy culture”…. Newsweek also spoke to Jon Holden, president of the 751 District for the International Association of Machinists, Boeing’s largest union, which represents more than 32,000 aerospace workers. Holden said that numerous whistleblowers had complained to the FAA over Boeing’s attempt to cut staff and reduce inspections in an effort to “speed up the rate” at which planes went out the door…

Holden’s union is currently in contract negotiations with Boeing, and is attempting to secure a 40% pay rise alongside a 50-year guarantee of work security for its members.

CNN also reports on new allegations Wednesday from a former Boeing quality-control manager: that “for years workers at its 787 Dreamliner factory in Everett, Washington, routinely took parts that were deemed unsuitable to fly out of an internal scrap yard and put them back on factory assembly lines.”
In his first network TV interview, Merle Meyers, a 30-year veteran of Boeing, described to CNN what he says was an elaborate off-the-books practice that Boeing managers at the Everett factory used to meet production deadlines, including taking damaged and improper parts from the company’s scrapyard, storehouses and loading docks… Meyers claims that the lapses he witnessed were intentional, organized efforts designed to thwart quality control processes in an effort to keep up with demanding production schedules. Beginning in the early 2000s and continuing for more than a decade, Meyers estimates, about 50,000 parts “escaped” quality control and were used to build aircraft. Those parts include everything from small items like screws to more complex assemblies like wing flaps. A single Boeing 787 Dreamliner, for example, has approximately 2.3 million parts…

Based on conversations Meyers says he had with current Boeing workers in the time since he left the company, he believes that while employees no longer remove parts from the scrapyard, the practice of using other unapproved parts in assembly lines continues. “Now they’re back to taking parts of body sections — everything — right when it arrives at the Everett site, bypassing quality, going right to the airplane,” Meyers said.

Company emails going back years show that Meyers repeatedly flagged the issue to Boeing’s corporate investigations team, pointing out what he says were blatant violations of Boeing’s safety rules. But investigators routinely failed to enforce those rules, Meyers says, even ignoring “eye witness observations and the hard work done to ensure the safety of future passengers and crew,” he wrote in an internal 2022 email provided to CNN.

Read more of this story at Slashdot.

Watch Volunteers Emerge After Living One Year in a Mars Simulation

They lived 378 days in a “mock Mars habitat” in Houston, reports Engadget. But today the four volunteers for NASA’s yearlong simulation will finally emerge from their 1,700-square-foot habitat at the Johnson Space Center, a structure 3D-printed from materials that could be created with Martian soil.

And you can watch the “welcome home” ceremony’s livestream starting at 5 p.m. EST on NASA TV (also embedded in Engadget’s story). More details from NASA:

For more than a year, the crew simulated Mars mission operations, including “Marswalks,” grew and harvested several vegetables to supplement their shelf-stable food, maintained their equipment and habitat, and operated under additional stressors a Mars crew will experience, including communication delays with Earth, resource limitations, and isolation.

One of the mission’s crew members told the Houston Chronicle they were “very excited to go back to ‘Earth,’ but of course there is a bittersweet aspect to it just like any time you reach the completion of something that has dominated one’s life for several years.”

Various crew members left behind their children or long-term partner for this once-in-a-lifetime experience, according to an earlier article, which also notes that NASA is paying the participants $10 per hour “for all waking hours, up to 16 hours per day. That’s as much as $60,480 for the 378-day mission.”
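For anyone checking that figure, the quoted maximum is just the hourly rate times the cap on paid waking hours times the mission length:

    hourly_rate = 10        # dollars per waking hour
    max_hours_per_day = 16  # NASA pays for at most 16 waking hours a day
    mission_days = 378

    print(hourly_rate * max_hours_per_day * mission_days)  # 60480 -> $60,480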

Engadget points out there are already plans for two more one-year “missions” — with the second one expected to begin next spring…

I’m curious. Would any Slashdot readers be willing to spend a year in a mock Mars habitat?

Read more of this story at Slashdot.

‘How Good Is ChatGPT at Coding, Really?’

IEEE Spectrum (the IEEE’s official publication) asks the question. “How does an AI code generator compare to a human programmer?”

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code — with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent — depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code.

The study tested GPT-3.5 on 728 coding problems from the LeetCode testing platform — and in five programming languages: C, C++, Java, JavaScript, and Python. The results?

Overall, ChatGPT was fairly good at solving problems in the different coding languages — but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” said Yutian Tang, a lecturer at the University of Glasgow. For example, ChatGPT’s ability to produce functional code for “easy” coding problems dropped from 89 percent to 52 percent after 2021. And its ability to generate functional code for “hard” problems dropped from 40 percent to 0.66 percent after this time as well…
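The study’s evaluation pipeline isn’t reproduced in the article, but the bookkeeping behind numbers like these is simple to sketch: bucket each attempted problem by difficulty and by whether it appeared on LeetCode before or after 2021, then report the share that passed all tests. The record fields below are illustrative placeholders, not the researchers’ data format:

    from collections import defaultdict

    results = [
        # e.g. {"difficulty": "easy", "post_2021": False, "passed": True}, ...
    ]

    totals = defaultdict(lambda: [0, 0])  # (difficulty, era) -> [passed, attempted]
    for r in results:
        era = "post-2021" if r["post_2021"] else "pre-2021"
        bucket = totals[(r["difficulty"], era)]
        bucket[0] += int(r["passed"])
        bucket[1] += 1

    for (difficulty, era), (passed, attempted) in sorted(totals.items()):
        rate = 100.0 * passed / attempted
        print(f"{difficulty:>6} / {era}: {rate:.1f}% functional ({passed}/{attempted})")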

The researchers also explored the ability of ChatGPT to fix its own coding errors after receiving feedback from LeetCode. They randomly selected 50 coding scenarios where ChatGPT initially generated incorrect code, either because it didn’t understand the content or the problem at hand. While ChatGPT was good at fixing compiling errors, it generally was not good at correcting its own mistakes… The researchers also found that ChatGPT-generated code did have a fair number of vulnerabilities, such as a missing null test, but many of these were easily fixable.
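The feedback experiment amounts to a bounded repair loop: resubmit the failing code together with the judge’s error message and ask the model to try again. The sketch below is a guess at the general shape of such a loop, with generate_code and run_tests as hypothetical placeholders for the model call and the LeetCode-style judge, and a retry budget the study doesn’t actually specify:

    MAX_ROUNDS = 3  # assumed bound; not a number from the study

    def repair_loop(problem_statement, generate_code, run_tests):
        """Generate a solution, then repeatedly feed failures back to the model."""
        code = generate_code(problem_statement)
        for _ in range(MAX_ROUNDS):
            verdict = run_tests(code)          # e.g. {"passed": bool, "error": str}
            if verdict["passed"]:
                return code
            feedback = (
                f"{problem_statement}\n\n"
                f"Your previous solution:\n{code}\n\n"
                f"It failed with: {verdict['error']}\n"
                "Please fix the solution."
            )
            code = generate_code(feedback)
        return None  # compile errors were often fixed; logic errors usually were not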

“Interestingly, ChatGPT is able to generate code with smaller runtime and memory overheads than at least 50 percent of human solutions to the same LeetCode problems…”

Read more of this story at Slashdot.

New SnailLoad Attack Exploits Network Latency To Spy On Users’ Web Activities

Longtime Slashdot reader Artem S. Tashkinov shares a report from The Hacker News: A group of security researchers from the Graz University of Technology have demonstrated a new side-channel attack known as SnailLoad that could be used to remotely infer a user’s web activity. “SnailLoad exploits a bottleneck present on all Internet connections,” the researchers said in a study released this week. “This bottleneck influences the latency of network packets, allowing an attacker to infer the current network activity on someone else’s Internet connection. An attacker can use this information to infer websites a user visits or videos a user watches.” A defining characteristic of the approach is that it obviates the need for carrying out an adversary-in-the-middle (AitM) attack or being in physical proximity to the Wi-Fi connection to sniff network traffic. Specifically, it entails tricking a target into loading a harmless asset (e.g., a file, an image, or an ad) from a threat actor-controlled server, which then exploits the victim’s network latency as a side channel to determine online activities on the victim system.

To perform such a fingerprinting attack and glean what video or website a user might be watching or visiting, the attacker conducts a series of latency measurements of the victim’s network connection as the content is being downloaded from the server while they are browsing or viewing. It then involves a post-processing phase that employs a convolutional neural network (CNN) trained with traces from an identical network setup to make the inference with an accuracy of up to 98% for videos and 63% for websites. In other words, due to the network bottleneck on the victim’s side, the adversary can deduce the transmitted amount of data by measuring the packet round trip time (RTT). The RTT traces are unique per video and can be used to classify the video watched by the victim. The attack is so named because the attacking server transmits the file at a snail’s pace in order to monitor the connection latency over an extended period of time.
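The measurement primitive itself is unremarkable, which is part of what makes the attack interesting: sample round-trip time at a fixed interval while the slow download trickles in, and the resulting latency trace mirrors whatever else the victim’s link is carrying. Below is a client-side sketch of collecting such a trace; in SnailLoad the attacker measures from its own server via the ongoing transfer, and the host, port and timings here are placeholders rather than values from the paper:

    import socket
    import time

    HOST, PORT = "example.com", 443  # placeholder endpoint
    INTERVAL_S = 0.1                 # one sample every 100 ms
    DURATION_S = 30                  # length of the trace

    def sample_rtt(host, port, timeout=2.0):
        """Approximate RTT as the time to complete a TCP handshake."""
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return time.monotonic() - start

    trace = []
    deadline = time.monotonic() + DURATION_S
    while time.monotonic() < deadline:
        trace.append(sample_rtt(HOST, PORT))
        time.sleep(INTERVAL_S)

    print(f"{len(trace)} samples, mean RTT {sum(trace) / len(trace) * 1000:.1f} ms")

Spikes in such a trace correspond to moments when the link’s queue is full of other traffic, which is exactly the signal the researchers feed to their classifier.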

Read more of this story at Slashdot.