Watch Volunteers Emerge After Living One Year in a Mars Simulation

They lived 378 days in a “mock Mars habitat” in Houston, reports Engadget. But today the four volunteers for NASA’s yearlong simulation will finally emerge from their 1,700-square-foot habitat at the Johnson Space Center that was 3D-printed from materials that could be created with Martian soil.

And you can watch the “welcome home” ceremony’s livestream starting at 5 p.m. EST on NASA TV (also embedded in Engadget’s story). More details from NASA:

For more than a year, the crew simulated Mars mission operations, including “Marswalks,” grew and harvested several vegetables to supplement their shelf-stable food, maintained their equipment and habitat, and operated under additional stressors a Mars crew will experience, including communication delays with Earth, resource limitations, and isolation.

One of the mission’s crew members told the Houston Chronicle they were “very excited to go back to ‘Earth,’ but of course there is a bittersweet aspect to it just like any time you reach the completion of something that has dominated one’s life for several years.”

Various crew members left behind their children or long-term partner for this once-in-a-lifetime experience, according to an earlier article, which also notes that NASA is paying the participants $10 per hour “for all waking hours, up to 16 hours per day. That’s as much as $60,480 for the 378-day mission.”

Engadget points out there are already plans for two more one-year “missions” — with the second one expected to begin next spring…

I’m curious. Would any Slashdot readers be willing to spend a year in a mock Mars habitat?

Read more of this story at Slashdot.

‘How Good Is ChatGPT at Coding, Really?’

IEEE Spectrum (the IEEE’s official publication) asks the question. “How does an AI code generator compare to a human programmer?”

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code — with success rates ranging anywhere from as poor as 0.66 percent to as good as 89 percent — depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code.

The study tested GPT-3.5 on 728 coding problems from the LeetCode testing platform — and in five programming languages: C, C++, Java, JavaScript, and Python. The results?

Overall, ChatGPT was fairly good at solving problems in the different coding languages — but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” said Yutian Tang, a lecturer at the University of Glasgow. For example, ChatGPT’s ability to produce functional code for “easy” coding problems dropped from 89 percent to 52 percent after 2021. And its ability to generate functional code for “hard” problems dropped from 40 percent to 0.66 percent after this time as well…

The researchers also explored the ability of ChatGPT to fix its own coding errors after receiving feedback from LeetCode. They randomly selected 50 coding scenarios where ChatGPT initially generated incorrect code, either because it didn’t understand the content or the problem at hand. While ChatGPT was good at fixing compiling errors, it generally was not good at correcting its own mistakes… The researchers also found that ChatGPT-generated code did have a fair amount of vulnerabilities, such as a missing null test, but many of these were easily fixable.
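To make the “missing null test” vulnerability class concrete, here is a hypothetical Python sketch in the spirit of a LeetCode-style linked-list problem. This is not code from the study — the function names and the `Node` class are invented for illustration — but it shows how an otherwise-functional solution can crash on an empty input, and how small the fix is:

```python
class Node:
    """Minimal singly-linked-list node (invented for this illustration)."""
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def middle_value_unsafe(head):
    """Return the value of the middle node — crashes if head is None."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow.val  # AttributeError when the list is empty

def middle_value_safe(head):
    """Same logic, with the null test the study found often missing."""
    if head is None:  # the easily added null check
        return None
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow.val
```

On a three-element list both versions return the middle value, but only the second survives an empty list — exactly the kind of easily fixable gap the researchers describe.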

“Interestingly, ChatGPT is able to generate code with smaller runtime and memory overheads than at least 50 percent of human solutions to the same LeetCode problems…”

Read more of this story at Slashdot.

New SnailLoad Attack Exploits Network Latency To Spy On Users’ Web Activities

Longtime Slashdot reader Artem S. Tashkinov shares a report from The Hacker News: A group of security researchers from the Graz University of Technology have demonstrated a new side-channel attack known as SnailLoad that could be used to remotely infer a user’s web activity. “SnailLoad exploits a bottleneck present on all Internet connections,” the researchers said in a study released this week. “This bottleneck influences the latency of network packets, allowing an attacker to infer the current network activity on someone else’s Internet connection. An attacker can use this information to infer websites a user visits or videos a user watches.” A defining characteristic of the approach is that it obviates the need for carrying out an adversary-in-the-middle (AitM) attack or being in physical proximity to the Wi-Fi connection to sniff network traffic. Specifically, it entails tricking a target into loading a harmless asset (e.g., a file, an image, or an ad) from a threat actor-controlled server, which then exploits the victim’s network latency as a side channel to determine online activities on the victim system.

To perform such a fingerprinting attack and glean which video a user might be watching or which website they might be visiting, the attacker conducts a series of latency measurements of the victim’s network connection as the content is being downloaded from the server while they are browsing or viewing. It then involves a post-processing phase that employs a convolutional neural network (CNN) trained with traces from an identical network setup to make the inference with an accuracy of up to 98% for videos and 63% for websites. In other words, due to the network bottleneck on the victim’s side, the adversary can deduce the transmitted amount of data by measuring the packet round trip time (RTT). The RTT traces are unique per video and can be used to classify the video watched by the victim. The attack is so named because the attacking server transmits the file at a snail’s pace in order to monitor the connection latency over an extended period of time.
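The two phases described above — collecting an RTT trace and then matching it against reference traces — can be sketched roughly as follows. This is an illustrative Python sketch, not the researchers’ code: in a real SnailLoad setup the probe would be a tiny request to the attacker-controlled server while the snail-paced download keeps the victim’s bottleneck queue occupied, and the classifier would be the trained CNN rather than the simple nearest-neighbour stand-in used here. All names and parameters are invented:

```python
import time
import statistics

def collect_rtt_trace(probe, n_samples=20, interval=0.0):
    """Record round-trip-time samples by timing a probe callable.

    `probe` stands in for sending a small packet to the attacker's
    server and waiting for the reply; timing it repeatedly yields
    the latency trace that reflects the victim's network activity.
    """
    trace = []
    for _ in range(n_samples):
        start = time.perf_counter()
        probe()
        trace.append(time.perf_counter() - start)
        if interval:
            time.sleep(interval)
    return trace

def match_trace(observed, reference_traces):
    """Pick the reference trace closest to the observed one.

    The paper trains a CNN on traces from an identical network
    setup; mean absolute difference plus nearest neighbour is a
    toy substitute that shows the classification step's shape.
    """
    def distance(a, b):
        return statistics.fmean(abs(x - y) for x, y in zip(a, b))
    return min(reference_traces,
               key=lambda name: distance(observed, reference_traces[name]))
```

A trace whose latencies hover near a given reference (say, the trace previously recorded for a particular video) will be matched to it, which is the intuition behind the per-video uniqueness of RTT traces that the researchers exploit.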

Read more of this story at Slashdot.