Nintendo ‘Hacker’ Gary Bowser Released From Federal Prison

An anonymous reader quotes a report from TorrentFreak: Last year, a U.S. federal court handed a 40-month prison sentence to Gary Bowser. The Canadian pleaded guilty to being part of the Nintendo hacking group “Team Xecuter” and has now served his time. In part due to his good behavior, Bowser got an early release from federal prison. […] In a recent video interview with Nick Moses, Bowser explains that he was released from federal prison on March 28th. He is currently being processed at the Northwest Detention Center in Tacoma, Washington, in preparation for his return to Canada.

What his life will look like in Canada remains uncertain. However, in federal prison, Bowser showed that he doesn’t shy away from putting in work and helping other people in need. Aside from his prison job, he spent several hours a night on suicide watch. The prison job brought in a meager income, a large part of which went toward his outstanding restitution, which totals $14.5 million. Thus far, less than $200 has been paid off. “I’ve been making payments of $25 per month, which they’ve been taking from my income because I had a job in federal prison. So far I paid $175,” Bowser tells Nick Moses.

If Bowser manages to find a stable source of income in Canada, Nintendo will get a chunk of that as well. As part of a consent judgment, he agreed to pay $10 million to Nintendo, which is the main restitution priority. “The agreement with them is that the maximum they can take is 25 to 30 percent of your gross monthly income. And I have up to six months before I have to start making payments,” Bowser notes. At that rate, it is unlikely that Nintendo will ever see the full amount. Put differently, Bowser will carry the financial consequences of his Team Xecuter involvement for the rest of his life.
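To see why the full amount is effectively out of reach, a quick back-of-the-envelope calculation helps. The figures below come from the story itself, except the post-release monthly income, which is a purely hypothetical stand-in:

```python
# Back-of-the-envelope payoff timeline, using figures from the story.
# The post-release income figure below is hypothetical, not from the story.

total_restitution = 14_500_000  # USD, total restitution ordered
nintendo_judgment = 10_000_000  # USD, consent judgment owed to Nintendo

# At the prison payment rate of $25/month:
prison_rate = 25  # USD per month
years_at_prison_rate = total_restitution / prison_rate / 12
print(f"At $25/month: {years_at_prison_rate:,.0f} years")  # ~48,333 years

# With a hypothetical gross income of $3,000/month, garnished at the
# 25-30 percent cap Bowser describes:
hypothetical_income = 3_000  # USD per month (assumption)
for cap in (0.25, 0.30):
    monthly_payment = hypothetical_income * cap
    years = nintendo_judgment / monthly_payment / 12
    print(f"At {cap:.0%} of ${hypothetical_income:,}/mo: {years:,.0f} years")
```

Even at the higher garnishment cap and a steady income, the Nintendo judgment alone runs to roughly 900 years — which is exactly the story’s point.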

Read more of this story at Slashdot.

The Rise of DOOM Chronicled on Retro Site for ‘Shareware Heroes’ Book

SharewareHeroes.com recreates the fonts and cursor you’d see after dialing up a local bulletin-board system in the early 1990s. It’s there to promote a new book — successfully crowdfunded by 970 backers — chronicling “a critical yet long overlooked chapter in video game history: the rise and eventual fall of the shareware model.”

The book promises to explore “a hidden games publishing market” that for several years “had no powerful giants,” with games instead distributed “across the nascent internet for anyone to enjoy (and, if they liked it enough, pay for).”

And the site features a free excerpt from the chapter about DOOM:
It seemed there was no stopping id Software. Commander Keen had given them their freedom, and Wolfenstein 3D’s mega-success had earned them the financial cushion to do anything. But all they wanted was to beat the last game — to outdo both themselves and everyone else. And at the centre of that drive was a push for ever-better technology. By the time Wolfenstein 3D’s commercial prequel Spear of Destiny hit retail shelves, John Carmack had already built a new engine.

This one had texture-mapped floors and ceilings — not just walls. It supported diminished lighting, which meant things far away could recede into the shadows, disappearing into the distance. And it had variable-height rooms, allowing for elevated platforms where projectile-throwing enemies could hang out. Most exciting of all, it allowed for non-orthogonal walls — which meant that rooms could be odd-shaped, with walls jutting out at any arbitrary angle from each other, rather than the traditional rectangular boxed design that had defined first-person-perspective games up until then.
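For readers curious about the engine features mentioned here, “diminished lighting” is essentially distance-based light falloff. Below is a minimal sketch of the idea in Python — an illustration of the general technique only, not id Software’s code (the real engine indexed precomputed colormap tables rather than doing per-pixel floating-point math):

```python
def diminish(color, distance, max_visible=1200.0):
    """Scale an RGB color by distance so far-away surfaces fade to black.

    A toy version of DOOM-style diminished lighting; the actual engine
    looked up one of a set of precomputed, darkened palettes instead.
    """
    brightness = max(0.0, 1.0 - distance / max_visible)
    return tuple(int(c * brightness) for c in color)

wall_color = (200, 64, 64)  # a reddish wall texture sample
for d in (0, 300, 600, 900, 1200):
    print(d, diminish(wall_color, d))
# Output fades from (200, 64, 64) at d=0 down to (0, 0, 0) at d=1200.
```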

It ran at half the speed of Wolfenstein 3D’s engine, but they were thinking about doing a 3D Keen game next — so that wouldn’t matter. At least not until they saw it in action. Everyone but Tom Hall suddenly got excited about doing another shooter, which meant Carmack would have to optimise the hell out of his engine to restore that sense of speed. Briefly they considered a proposal from 20th Century Fox to do a licensed Aliens shooter, but they didn’t like the idea of giving up their creative independence, so they thought about how they could follow up Wolfenstein 3D with something new. Fighting aliens in space is old hat. This time it could be about fighting demons in space. This time it could be called DOOM.

The book’s title is Shareware Heroes: The Renegades Who Redefined Gaming at the Dawn of the Internet — here’s a page listing the people interviewed, as well as the book’s table of contents.

And this chapter culminates with what happened when the first version of DOOM was finally released. “BBSs and FTP servers around America crashed under the immense load of hundreds of thousands of people clamouring to download the game on day one.

“Worse for universities around the country, people were jumping straight into the multiplayer once they had the game — and they kept crashing the university networks…”

Read more of this story at Slashdot.

What Happens When You Put 25 ChatGPT-Backed Agents Into an RPG Town?

“A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims,” writes Ars Technica, “where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior.”

“Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day,” write the researchers in their paper… To pull this off, the researchers relied heavily on a large language model for social interaction, specifically the ChatGPT API. In addition, they created an architecture that simulates minds with memories and experiences, then let the agents loose in the world to interact…. To study the group of AI agents, the researchers set up a virtual town called “Smallville,” which includes houses, a cafe, a park, and a grocery store…. Interestingly, when the characters in the sandbox world encounter each other, they often speak to each other using natural language provided by ChatGPT. In this way, they exchange information and form memories about their daily lives.
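As a rough illustration of that architecture — agents that log observations to a memory stream, retrieve relevant memories, and hand them to a language model when conversing — here is a minimal sketch. Every name in it (`Agent`, `llm`, and so on) is a hypothetical stand-in rather than code from the paper, and `llm()` is a placeholder where a real ChatGPT API call would go:

```python
import datetime

def llm(prompt: str) -> str:
    """Placeholder for a ChatGPT API call; returns a canned reply so the
    sketch runs standalone. Swap in a real client to experiment."""
    return "(canned reply for demonstration purposes)"

class Agent:
    def __init__(self, name: str, traits: str):
        self.name = name
        self.traits = traits
        self.memories = []  # the "memory stream": (timestamp, text) pairs

    def remember(self, observation: str) -> None:
        self.memories.append((datetime.datetime.now(), observation))

    def retrieve(self, query: str, k: int = 5) -> list:
        # The paper scores memories by recency, importance, and relevance;
        # this toy version simply returns the k most recent entries.
        return [text for _, text in self.memories[-k:]]

    def converse(self, other: "Agent", message: str) -> str:
        context = "\n".join(self.retrieve(message))
        prompt = (
            f"You are {self.name} ({self.traits}).\n"
            f"Relevant memories:\n{context}\n"
            f"{other.name} says: {message}\nRespond in character:"
        )
        reply = llm(prompt)
        self.remember(f"{other.name} said: {message}")
        self.remember(f"I replied: {reply}")
        return reply

# Example: two agents meet and both form memories of the exchange.
alice = Agent("Alice", "a painter who loves gossip")
bob = Agent("Bob", "a cafe owner planning a party")
print(alice.converse(bob, "Are you coming to the Valentine's Day party?"))
```

The paper’s actual retrieval function weights recency, importance, and relevance together, and adds a periodic “reflection” step that distills memories into higher-level insights; both are omitted here for brevity.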

When the researchers combined these basic ingredients and ran the simulation, interesting things began to happen. In the paper, the researchers list three emergent behaviors resulting from the simulation. None of these were pre-programmed; rather, they resulted from the interactions between the agents. These included “information diffusion” (agents telling each other information and having it spread socially among the town), “relationship memory” (agents remembering past interactions and mentioning those earlier events later), and “coordination” (planning and attending a Valentine’s Day party together with other agents)…. “Starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party,” the researchers write, “the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time….”

To get a look at Smallville, the researchers have posted an interactive demo online through a special website, but it’s a “pre-computed replay of a simulation” described in the paper and not a real-time simulation. Still, it gives a good illustration of the richness of social interactions that can emerge from an apparently simple virtual world running in a computer sandbox.

Interestingly, the researchers hired human evaluators to gauge how believable the AI agents’ responses were — and found the agents more believable than a control condition in which humans authored the responses themselves.

Thanks to long-time Slashdot reader Baron_Yam for sharing the article.

Read more of this story at Slashdot.

Netflix Password Sharing Crackdown To Expand To US In Q2 2023

Netflix is planning a “broad rollout” of the password sharing crackdown that it began implementing in 2022, the company said today in its Q1 2023 earnings report (PDF). MacRumors reports: The “paid sharing” plan that Netflix has been testing in a limited number of countries will expand to additional countries in the second quarter, including the United States. Netflix said that it was “pleased with the results” of the password sharing restrictions that it implemented in Canada, New Zealand, Spain, and Portugal earlier this year. Netflix initially planned to start eliminating password sharing in the United States in the first quarter of the year, but the company said that it had learned from its tests and “found opportunities to improve the experience for members.” There is a “cancel reaction” expected in each market where paid sharing is implemented, but increased revenue comes later as borrowers activate their own Netflix accounts and existing members add “extra member” accounts.

In Canada, paid sharing resulted in a larger Netflix membership base and an acceleration in revenue growth, which has given Netflix the confidence to expand it to the United States. When Netflix brings its paid sharing rules to the United States, multi-household account use will no longer be permitted. Netflix subscribers who share an account with those who do not live with them will need to pay for an additional member. In Canada, Netflix charges $7.99 CAD for an extra member, which is around $6 USD. […] Netflix claims that more than 100 million households are sharing accounts, which is impacting its ability to “invest in and improve Netflix” for paying members.

Read more of this story at Slashdot.

YouTube TV Nabs Its First Technical Emmy Win For ‘Views’ Feature

YouTube TV just won its first Technical Emmy award for its “Views” suite of features, which lets users access sports highlights, key plays, player stats and game scores. TechCrunch reports: At the 74th annual Technology & Engineering Emmy Awards last night, YouTube TV was declared the winner for the category “AI-ML Curation of Sports Highlights.” The tech company also announced today that Key Plays reached a notable milestone — the feature was used in over 10 million watch sessions on the platform. Last year, viewers used Key Plays the most during the World Cup, regular season NFL games and Premier League matches.

The Key Plays view tracks important plays in a game. Users can tap on a play to rewatch the moment it occurred in the game, which is helpful for viewers who missed a live game and want to catch up on key moments. When YouTube TV launched Views in 2018, it was only available for baseball, basketball, football and hockey. Soccer and golf were added later on. The suite of features was also initially limited to phones and tablets. Today, it is available within the YouTube TV app across smart TVs and mobile devices.

In addition to Stats, Key Plays and Scores View, there’s also Fantasy Football View, which is a mobile-only feature and lets users link their existing fantasy football account. That way, when a user is watching NFL games on YouTube TV, the feature allows them to see how their team is performing in real time. Plus, there’s a “Jump to” function for users to quickly access a segment they want to view, which is especially handy for tennis fans and for users watching the Olympics. “Views came out of a team brainstorm about five years ago and launched about a year after YouTube TV,” said Kathryn Cochrane, YouTube TV’s group product manager, in a company blog post. “A lot of our viewers are devoted sports fans, and we found that when they watch sports, they aren’t just looking at what’s on the big screen. They were also actively on their phones, finding more details such as stats for their fantasy football league, updates from other games, and more, all to enhance what they were already watching.”

Read more of this story at Slashdot.

New MacBooks, a Big New WatchOS Update, and Apple’s Mixed Reality Headset To Be Announced At WWDC

In addition to the company’s long-rumored mixed reality headset, Apple is expected to launch new MacBooks, as well as a “major” update to the Apple Watch’s watchOS software at its Worldwide Developers Conference (WWDC) in June. All told, WWDC 2023 could end up being one of Apple’s “biggest product launch events ever,” according to Bloomberg’s Mark Gurman. The Verge reports: Let’s start with the Macs. Gurman doesn’t explicitly say which macOS-powered computers Apple could announce in June, but lists around half a dozen devices it currently plans to release this year or in early 2024. There’s an all new 15-inch MacBook Air, an updated 13-inch MacBook Air, and new 13-inch and “high-end” MacBook Pros. Meanwhile, on the Mac side, Apple still needs to replace its last Intel-powered device, the Mac Pro, with an Apple Silicon model, and it also reportedly has plans to refresh its all-in-one 24-inch iMac.

Bloomberg’s report notes that “at least some of the new laptops” will make an appearance. The bad news is that none are likely to run Apple’s next-generation M3 chips, and will instead ship with M2-era processors. Apple apparently also has a couple of new Mac Studio computers in development, but Bloomberg is less clear on when they could launch.

Over on the software side, which is WWDC’s traditional focus, watchOS will reportedly receive a “major” update that includes a revamped interface. Otherwise, we could be in for a relatively quiet show on the operating system front as iOS, iPadOS, macOS, and tvOS are not expected to receive major updates this year. Gurman does say that work to allow sideloading on iOS to comply with upcoming EU legislation is ongoing.

Read more of this story at Slashdot.

How Google’s ‘Don’t Be Evil’ Motto Has Evolved For the AI Age

In a special report for CBS News’ 60 Minutes, Google CEO Sundar Pichai shares his concerns about artificial intelligence and why the company is choosing not to release advanced models of its AI chatbot. From the report: When Google filed for its initial public offering in 2004, its founders wrote that the company’s guiding principle, “Don’t be evil,” was meant to help ensure it did good things for the world, even if it had to forgo some short-term gains. The phrase remains in Google’s code of conduct. Pichai told 60 Minutes he is being responsible by not releasing advanced models of Bard, in part, so society can get acclimated to the technology, and the company can develop further safety layers.

One of the things Pichai told 60 Minutes that keeps him up at night is Google’s AI technology being deployed in harmful ways. Google’s chatbot, Bard, has built-in safety filters to help combat the threat of malevolent users. Pichai said the company will need to constantly update the system’s algorithms to combat disinformation campaigns and detect deepfakes — computer-generated images that appear to be real. As Pichai noted in his 60 Minutes interview, consumer AI technology is in its infancy. He believes now is the right time for governments to get involved.

“There has to be regulation. You’re going to need laws … there have to be consequences for creating deep fake videos which cause harm to society,” Pichai said. “Anybody who has worked with AI for a while … realize[s] this is something so different and so deep that, we would need societal regulations to think about how to adapt.” That adaptation is already happening around us, with technology that Pichai believes “will be more capable than anything we’ve ever seen before.” Soon it will be up to society to decide how it’s used and whether to abide by Alphabet’s code of conduct and, “Do the right thing.”

Read more of this story at Slashdot.

How Should AI Be Regulated?

A New York Times opinion piece argues people in the AI industry “are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down.” But how?

What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That’s where the government comes in — or so they hope… [A]fter talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I’d prioritize.

The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand… The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.

The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet. Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast.
The piece also recommends that AI-design companies “bear at least some liability for what their models [do].” But what legislation should we see — and what legislation will we see? “One thing regulators shouldn’t fear is imperfect rules that slow a young industry,” the piece argues.

“For once, much of that industry is desperate for someone to help slow it down.”

Read more of this story at Slashdot.