First Small Modular Nuclear Reactor Certified For Use In US

The U.S. Nuclear Regulatory Commission has certified the design for what will be the United States’ first small modular nuclear reactor. The Associated Press reports: The rule that certifies the design was published Thursday in the Federal Register. It means that companies seeking to build and operate a nuclear power plant can pick the design for a 50-megawatt, advanced light-water small modular nuclear reactor by Oregon-based NuScale Power and apply to the NRC for a license. It’s the final determination that the design is acceptable for use, so it can’t be legally challenged during the licensing process when someone applies to build and operate a nuclear power plant, NRC spokesperson Scott Burnell said Friday. The rule becomes effective in late February.

The U.S. Energy Department said the newly approved design “equips the nation with a new clean power source to help drive down” planet-warming greenhouse gas emissions. It’s the seventh nuclear reactor design cleared for use in the United States. The rest are for traditional, large, light-water reactors. Diane Hughes, NuScale’s vice president of marketing and communications, said the design certification is a historic step forward toward a clean energy future and makes the company’s VOYGR power plant a near-term deployable solution for customers. The first small modular reactor design application package included over 2 million pages of supporting materials, Hughes added. “NuScale has also applied to the NRC for approval of a larger design, at 77 megawatts per module, and the agency is checking the application for completeness before starting a full review,” adds the report.

Read more of this story at Slashdot.

IBM Top Brass Accused Again of Using Mainframes To Prop Up Watson, Cloud Sales

IBM, along with 13 of its current and former executives, has been sued by investors who claim the IT giant used mainframe sales to fraudulently prop up newer, more trendy parts of its business. The Register reports: In effect, IBM deceived the market about its progress in developing Watson, cloud technologies, and other new sources of revenue, by deliberately misclassifying the money it was making from mainframe deals, assigning that money instead to other products, it is alleged. The accusations emerged in a lawsuit [PDF] filed late last week against IBM in New York on behalf of the June E Adams Irrevocable Trust. It alleged Big Blue shifted sales by its “near-monopoly” mainframe business to its newer and less popular cloud, analytics, mobile, social, and security products (CAMSS), which bosses promoted as growth opportunities and designated “Strategic Imperatives.”

IBM is said to have created the appearance of demand for these Strategic Imperative products by bundling them into three- to five-year mainframe Enterprise License Agreements (ELA) with large banking, healthcare, and insurance company customers. In other words, it is claimed, mainframe sales agreements had Strategic Imperative products tacked on to help boost the sales performance of those newer offerings and give investors the impression customers were clamoring for those technologies from IBM. “Defendants used steep discounting on the mainframe part of the ELA in return for the customer purchasing catalog software (i.e. Strategic Imperative Revenue), unneeded and unused by the customer,” the lawsuit stated.

IBM is also alleged to have shifted revenue from its non-strategic Global Business Services (GBS) segment to Watson, a Strategic Imperative in the CAMSS product set, to convince investors that the company was successfully expanding beyond its legacy business. Last April the plaintiff Trust filed a similar case, which was joined by at least five other law firms representing other IBM shareholders. A month prior, the IBM board had been presented with a demand letter from shareholders to investigate the above allegations. Asked whether any action has been taken as a result of that letter, IBM has yet to respond.

Read more of this story at Slashdot.

CNET Pauses Publishing AI-Written Stories After Disclosure Controversy

CNET will pause publication of stories generated using artificial intelligence “for now,” the site’s leadership told employees on a staff call Friday. The Verge reports: The call, which lasted under an hour, was held a week after CNET came under fire for its use of AI tools on stories and one day after The Verge reported that AI tools had been in use for months, with little transparency to readers or staff. CNET hadn’t formally announced the use of AI until readers noticed a small disclosure. “We didn’t do it in secret,” CNET editor-in-chief Connie Guglielmo told the group. “We did it quietly.” CNET, owned by private equity firm Red Ventures, is among several websites that have been publishing articles written using AI. Other sites like Bankrate and CreditCards.com would also pause AI stories, executives on the call said.

The call was hosted by Guglielmo, Lindsey Turrentine, CNET’s EVP of content and audience, and Lance Davis, Red Ventures’ vice president of content. They answered a handful of questions submitted by staff ahead of time in the AMA-style call. Davis, who was listed as the point of contact for CNET’s AI stories until recently, also gave staff a more detailed rundown of the tool that has been utilized for the robot-written articles. Until now, most staff had very little insight into the machine that was generating dozens of stories appearing on CNET.

The AI, which is as yet unnamed, is a proprietary tool built by Red Ventures, according to Davis. AI editors are able to choose domains and domain-level sections from which to pull data and generate stories; editors can also use a combination of AI-generated text and their own writing or reporting. Turrentine declined to answer staff questions in today’s meeting about the dataset used to train the AI, or about plagiarism concerns, but said more information would be available next week and that some staff would get a preview of the tool.

Read more of this story at Slashdot.

D&D Will Move To a Creative Commons License, Requests Feedback On a New OGL

A new draft of the Dungeons & Dragons Open Gaming License, dubbed OGL 1.2 by publisher Wizards of the Coast, is now available for download. Polygon reports: The announcement was made Thursday by Kyle Brink, executive producer of D&D, on the D&D Beyond website. According to Wizards, this draft could place the OGL outside of the publisher’s control — which should sound good to fans enraged by recent events. Time will tell, but public comment will be accepted beginning Jan. 20 and will continue through Feb. 3. […] Creative Commons is a nonprofit organization that, by its own description, “helps overcome legal obstacles to the sharing of knowledge and creativity to address the world’s most pressing challenges.” As such, a Creative Commons license once enacted could ultimately put the OGL 1.2 outside of Wizards’ control in perpetuity.

“We’re giving the core D&D mechanics to the community through a Creative Commons license, which means that they are fully in your hands,” Brink said in the blog post. “If you want to use quintessentially D&D content from the SRD such as owlbears and magic missile, OGL 1.2 will provide you a perpetual, irrevocable license to do so.” So much trust has been lost over the last several weeks that it will no doubt take a while for legal experts — armchair and otherwise — to pore over the details of the new OGL. These are the bullet points that Wizards is promoting in its official statement:

– Protecting D&D’s inclusive play experience. As I said above, content more clearly associated with D&D (like the classes, spells, and monsters) is what falls under the OGL. You’ll see that OGL 1.2 lets us act when offensive or hurtful content is published using the covered D&D stuff. We want an inclusive, safe play experience for everyone. This is deeply important to us, and OGL 1.0a didn’t give us any ability to ensure it.

– TTRPGs and VTTs. OGL 1.2 will only apply to TTRPG content, whether published as books, as electronic publications, or on virtual tabletops (VTTs). Nobody needs to wonder or worry if it applies to anything else. It doesn’t.

– Deauthorizing OGL 1.0a. We know this is a big concern. The Creative Commons license and the open terms of 1.2 are intended to help with that. One key reason why we have to deauthorize: We can’t use the protective options in 1.2 if someone can just choose to publish harmful, discriminatory, or illegal content under 1.0a. And again, any content you have already published under OGL 1.0a will still always be licensed under OGL 1.0a.

– Very limited license changes allowed. Only two sections can be changed once OGL 1.2 is live: how you cite Wizards in your work and how we can contact each other. We don’t know what the future holds or what technologies we will use to communicate with each other, so we thought these two sections needed to be future-proofed.

A revised version of this draft will be presented to the community again “on or before February 17.”
“The process will extend as long as it needs to,” Brink said. “We’ll keep iterating and getting your feedback until we get it right.”

Read more of this story at Slashdot.

The Lights Have Been On At a Massachusetts School For Over a Year Because No One Can Turn Them Off

An anonymous reader quotes a report from NBC News: For nearly a year and a half, a Massachusetts high school has been lit up around the clock because the district can’t turn off the roughly 7,000 lights in the sprawling building. The lighting system was installed at Minnechaug Regional High School when it was built over a decade ago and was intended to save money and energy. But ever since the software that runs it failed on Aug. 24, 2021, the lights in the school, located in a Springfield suburb, have been on continuously, costing taxpayers a small fortune.

“We are very much aware this is costing taxpayers a significant amount of money,” Aaron Osborne, the assistant superintendent of finance at the Hampden-Wilbraham Regional School District, told NBC News. “And we have been doing everything we can to get this problem solved.” Osborne said it’s difficult to say how much money it’s costing because during the pandemic and in its aftermath, energy costs have fluctuated wildly. “I would say the net impact is in the thousands of dollars per month on average, but not in the tens of thousands,” Osborne said. That, in part, is because the high school uses highly efficient fluorescent and LED bulbs, he said. And, when possible, teachers have manually removed bulbs from fixtures in classrooms while staffers have shut off breakers not connected to the main system to douse some of the exterior lights.

But there’s hope on the horizon that the lights at Minnechaug will soon be dimmed. Paul Mustone, president of the Reflex Lighting Group, said the parts they need to replace the system at the school have finally arrived from the factory in China and they expect to do the installation over the February break. “And yes, there will be a remote override switch so this won’t happen again,” said Mustone, whose company has been in business for more than 40 years.

Read more of this story at Slashdot.

Android 13 Is Running On 5.2% of All Devices Five Months After Launch

According to the latest official Android distribution numbers from Google, Android 13 is running on 5.2% of all devices less than six months after launch. 9to5Google reports: According to Android Studio, devices running Android 13 now account for 5.2% of all devices. Meanwhile, Android 12 and 12L now account for 18.9% of the total, a significant increase from August’s 13.5% figure. Notably, while Google’s chart does include details about Android 13, it doesn’t make a distinction between Android 12 and 12L. Looking at the older versions, we see that usage of Android Oreo has finally dropped below 10%, with similar drops in percentage down the line. Android Jelly Bean, which previously weighed in at 0.3%, is no longer listed, while KitKat has dropped from 0.9% to 0.7%.

Android 13’s 5.2% distribution number “is better than it sounds,” writes Ryan Whitwam via ExtremeTech: These numbers show an accelerating pickup for Google’s new platform versions. If you look back at stats from the era of Android KitKat and Lollipop, the latest version would only have a fraction of this usage share after half a year. That’s because the only phones running the new software would be Google’s Nexus phones, plus maybe one or two new devices from OEMs that worked with Google to deploy the latest software as a marketing gimmick.

The improvements are thanks largely to structural changes in how Android is developed and deployed. For example, Project Treble was launched in 2017 to re-architect the platform, separating the OS framework from the low-level vendor code. This made it easier to update devices without waiting on vendors to provide updated drivers. We saw evidence of improvement that very year, and it’s gotten better ever since.

Read more of this story at Slashdot.

iOS 16.3 Expands Advanced Data Protection Option For iCloud Encryption Globally

Apple today announced that Advanced Data Protection is expanding beyond the United States. MacRumors reports: Starting with iOS 16.3, the security feature will be available globally, giving users the option to enable end-to-end encryption for many additional iCloud data categories, including Photos, Notes, Voice Memos, Messages backups, device backups, and more. iOS 16.3 is currently in beta and expected to be released to the public next week.

By default, Apple stores encryption keys for some iCloud data types on its servers to ensure that users can recover their data if they lose access to their Apple ID account. If a user enables Advanced Data Protection, the encryption keys are deleted from Apple’s servers and stored on a user’s devices only, preventing Apple, law enforcement, or anyone else from accessing the data, even if iCloud servers were to be breached.
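The key-management trade-off described above can be illustrated with a deliberately simplified sketch. This is not Apple’s actual implementation (which uses hardware-backed AES keys and the iCloud Keychain sync circle); it is a toy model using an XOR one-time pad as a stand-in cipher, with a hypothetical `server_key_escrow` dictionary playing the role of Apple’s server-side key store:

```python
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR one-time pad: a toy stand-in for the real AES-based iCloud encryption
    return bytes(p ^ k for p, k in zip(plaintext, key))

toy_decrypt = toy_encrypt  # XOR is its own inverse

data = b"voice memo"
key = secrets.token_bytes(len(data))
blob = toy_encrypt(key, data)  # what iCloud servers store

# Default mode: the service keeps a copy of the key, so data is
# recoverable if the user loses their device, but the service
# (or anyone who breaches it) can also decrypt the blob.
server_key_escrow = {"user@example.com": key}  # hypothetical server-side store
assert toy_decrypt(server_key_escrow["user@example.com"], blob) == data

# Advanced Data Protection mode: the key is deleted from the server
# and held only on the user's devices. The ciphertext on the server
# is now useless without the device-held key.
del server_key_escrow["user@example.com"]
device_key = key  # now the only copy
assert toy_decrypt(device_key, blob) == data
assert "user@example.com" not in server_key_escrow
```

The sketch shows why enabling the feature shifts recovery responsibility to the user: once the escrowed copy is deleted, losing every device (and any recovery key) means the data is unrecoverable by design.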

iCloud already provides end-to-end encryption for 14 data categories without Advanced Data Protection turned on, including Messages (excluding backups), passwords stored in iCloud Keychain, Health data, Apple Maps search history, Apple Card transactions, and more. Advanced Data Protection expands this protection to the vast majority of iCloud categories, with major exceptions including the Mail, Contacts, and Calendar apps. For more information, you can read Apple’s Advanced Data Protection support document.

Read more of this story at Slashdot.

Stephen Colbert To Produce TV Series Based On Roger Zelanzny’s Sci-Fi Novels ‘The Chronicles of Amber’

Stephen Colbert is joining the team that is adapting Roger Zelazny’s “The Chronicles of Amber” for television. Variety reports: Colbert will now executive produce the potential series under his Spartina production banner. Spartina joins Skybound Entertainment and Vincent Newman Entertainment (VNE) on the series version of the beloved fantasy novels, with Skybound first announcing their intention to develop the series back in 2016. The books have been cited as an influence on “Game of Thrones,” with author George R.R. Martin recently stating he wanted to see the books brought to the screen.

“The Chronicles of Amber” follows the story of Corwin, who is said to “awaken on Earth with no memory, but soon finds he is a prince of a royal family that has the ability to travel through different dimensions of reality (called ‘shadows’) and rules over the one true world, Amber.” The story is told over ten books with two story arcs: “The Corwin Cycle” and “The Merlin Cycle.” The series has sold more than fifteen million copies globally. The search is currently on for a writer to tackle the adaptation. No network or streamer is currently attached. Colbert and Spartina are currently under a first-look deal at CBS Studios, but CBS Studios is not currently attached as the studio behind the series. “George R.R. Martin and I have similar dreams,” Colbert said. “I’ve carried the story of Corwin in my head for over 40 years, and I’m thrilled to partner with Skybound and Vincent Newman to bring these worlds to life. All roads lead to Amber, and I’m happy to be walking them.”

Read more of this story at Slashdot.

Adobe Says It Isn’t Using Your Photos To Train AI Image Generators

In early January, Adobe came under fire for language used in its terms and conditions that seemed to indicate that it could use photographers’ photos to train generative artificial intelligence systems. The company has reiterated that this is not the case. PetaPixel reports: The language of its “Content analysis” section in its Privacy and Personal Data settings says that by default, users give Adobe permission to “analyze content using techniques such as machine learning (e.g., for pattern recognition) to develop and improve our products and services.” That sounded a lot like training artificial intelligence (AI) image generators. One of the sticking points of this particular section is that Adobe makes it an opt-out, not an opt-in, so many photographers likely had no idea they were already agreeing to it. “Machine learning-enabled features can help you become more efficient and creative,” Adobe explains. “For example, we may use machine learning-enabled features to help you organize and edit your images more quickly and accurately. With object recognition in Lightroom, we can auto-tag photos of your dog or cat.”

When pressed for comment in PetaPixel’s original coverage on January 5, Adobe didn’t immediately respond, leaving many to assume the worst. However, a day later, the company did provide some clarity on the issue to PetaPixel that some photographers may have missed. “We give customers full control of their privacy preferences and settings. The policy in discussion is not new and has been in place for a decade to help us enhance our products for customers. For anyone who prefers their content be excluded from the analysis, we offer that option here,” a spokesperson from Adobe’s public affairs office told PetaPixel. “When it comes to Generative AI, Adobe does not use any data stored on customers’ Creative Cloud accounts to train its experimental Generative AI features. We are currently reviewing our policy to better define Generative AI use cases.” In an interview with Bloomberg, Adobe Chief Product Officer Scott Belsky said: “We are rolling out a new evolution of this policy that is more specific. If we ever allow people to opt-in for generative AI specifically, we need to call it out and explain how we’re using it.”

“We have to be very explicit about these things.”

Read more of this story at Slashdot.