Mark Zuckerberg Assembles Team of Tech Execs For AI Advisory Council

An anonymous reader quotes a report from Quartz: Mark Zuckerberg has assembled some of his fellow tech chiefs into an advisory council to guide Meta on its artificial intelligence and product developments. The Meta Advisory Group will periodically meet with Meta’s management team, Bloomberg reported. Its members include: Stripe CEO and co-founder Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lutke, and former Microsoft executive and investor Charlie Songhurst.

“I’ve come to deeply respect this group of people and their achievements in their respective areas, and I’m grateful that they’re willing to share their perspectives with Meta at such an important time as we take on new opportunities with AI and the metaverse,” Zuckerberg wrote in an internal note to Meta employees, according to Bloomberg. The advisory council differs from Meta’s 11-person board of directors because its members are not elected by shareholders, nor do they have a fiduciary duty to Meta, a Meta spokesperson told Bloomberg. The spokesperson said that the men will not be paid for their roles on the advisory council. TechCrunch notes that the council has “only white men on it,” a makeup that “differs from Meta’s actual board of directors and its Oversight Board, which is more diverse in gender and racial representation.”

TechCrunch adds: “It’s telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. … it’s been proven time and time again that AI isn’t like other products. It’s a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups.”


‘Pay Researchers To Spot Errors in Published Papers’

Borrowing the idea of “bug bounties” from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature. Malte Elson, writing at Nature: Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In science, too, the costs of undetected errors are staggering. That’s why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward — up to a maximum of 2,500 francs.

Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus of 250 francs if no errors (or only minor ones) are found in their work. ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors. I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient. Unpaid peer reviewers are overburdened, and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.
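For a concrete sense of how such an incentive scheme fits together, here is a minimal Python sketch of the payout logic described above. The 1,000-franc base rate, the 2,500-franc cap and the two 250-franc author payments come from the article; the per-error severity bonuses are purely hypothetical placeholders, since ERROR’s actual bonus schedule is not given here.

```python
# Illustrative sketch of the ERROR-style payout scheme described above.
# Figures marked "article" come from the text; the severity tiers are
# hypothetical placeholders, not ERROR's actual bonus schedule.

REVIEWER_BASE_CHF = 1_000      # article: base rate of up to 1,000 francs per paper
REVIEWER_CAP_CHF = 2_500       # article: maximum total reward per paper
AUTHOR_PREP_CHF = 250          # article: compensation for preparing files and queries
AUTHOR_CLEAN_BONUS_CHF = 250   # article: bonus if no (or only minor) errors are found

# Hypothetical bonus per error, scaled by severity ("the bigger the error,
# the greater the reward").
SEVERITY_BONUS_CHF = {"minor": 100, "moderate": 300, "major": 750}


def reviewer_payout(errors_found: list[str]) -> int:
    """Reviewer's payout in francs for one checked paper: base plus bonuses, capped."""
    bonus = sum(SEVERITY_BONUS_CHF[severity] for severity in errors_found)
    return min(REVIEWER_BASE_CHF + bonus, REVIEWER_CAP_CHF)


def author_payout(errors_found: list[str]) -> int:
    """Author's compensation: preparation fee plus a bonus if nothing major is found."""
    only_minor = all(severity == "minor" for severity in errors_found)
    return AUTHOR_PREP_CHF + (AUTHOR_CLEAN_BONUS_CHF if only_minor else 0)


if __name__ == "__main__":
    # Example: a review turns up one moderate and two minor errors.
    found = ["moderate", "minor", "minor"]
    print(reviewer_payout(found))  # 1500 francs
    print(author_payout(found))    # 250 francs
```

Whatever the bonus schedule, the cap means a reviewer’s total reward for a paper never exceeds 2,500 francs, no matter how many errors it contains.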


Russia Likely Launched Counter Space Weapon Into Low Earth Orbit Last Week, Pentagon Says

The United States has assessed that Russia launched what is likely a counter space weapon last week that’s now in the same orbit as a U.S. government satellite, Pentagon spokesman Maj. Gen. Pat Ryder confirmed Tuesday. From a report: “What I’m tracking here is on May 16, as you highlighted, Russia launched a satellite into low Earth orbit that we assess is likely a counter space weapon presumably capable of attacking other satellites in low Earth orbit,” Ryder said when questioned by ABC News about the information, which was made public earlier Tuesday by Robert Wood, deputy U.S. ambassador to the United Nations.

“Russia deployed this new counter space weapon into the same orbit as a U.S. government satellite,” Ryder continued. “And so assessments further indicate characteristics resembling previously deployed counter space payloads from 2019 and 2022.” Ryder added: “Obviously, that’s something that we’ll continue to monitor. Certainly, we would say that we have a responsibility to be ready to protect and defend the space domain and ensure continuous and uninterrupted support to the joint and combined force. And we’ll continue to balance the need to protect our interests in space with our desire to preserve a stable and sustainable space environment.” When asked if the Russian counter space weapon posed a threat to the U.S. satellite, Ryder responded: “Well, it’s a counter space weapon in the same orbit as a U.S. government satellite.”


The First Crew Launch of Boeing’s Starliner Capsule Is On Hold Indefinitely

Longtime Slashdot reader schwit1 shares a report from Ars Technica: The first crewed test flight of Boeing’s long-delayed Starliner spacecraft won’t take off as planned Saturday and could face a longer postponement as engineers evaluate a stubborn leak of helium from the capsule’s propulsion system. NASA announced the latest delay of the Starliner test flight late Tuesday. Officials will take more time to consider their options for how to proceed with the mission after discovering the small helium leak on the spacecraft’s service module.

The space agency did not describe what options are on the table, but sources said they range from flying the spacecraft “as is” with a thorough understanding of the leak and confidence it won’t become more significant in flight, to removing the capsule from its Atlas V rocket and taking it back to a hangar for repairs. Theoretically, the former option could permit a launch attempt as soon as next week. The latter alternative could delay the launch until at least late summer.

“The team has been in meetings for two consecutive days, assessing flight rationale, system performance, and redundancy,” NASA said in a statement Tuesday night. “There is still forward work in these areas, and the next possible launch opportunity is still being discussed. NASA will share more details once we have a clearer path forward.”


DOJ Makes Its First Known Arrest For AI-Generated CSAM

In what’s believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ “looks to establish a judicial precedent that exploitative materials are still illegal,” reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.” The government says Anderegg’s images showed “nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men.” The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that the fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of “minors lasciviously displaying their genitals.” To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He’s currently in federal custody before a hearing scheduled for May 22.


EU Sets Benchmark For Rest of the World With Landmark AI Laws

An anonymous reader quotes a report from Reuters: Europe’s landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union’s AI Act is more comprehensive than the United States’ light-touch, voluntary-compliance approach, while China’s aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. […]

The AI Act imposes strict transparency obligations on high-risk AI systems, while such requirements for general-purpose AI models will be lighter. It restricts governments’ use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. “The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR,” he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.
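As a rough illustration of how such tiered, turnover-based fines scale, here is a minimal Python sketch. It assumes the fine is capped at the fixed amount or the percentage of global annual turnover, whichever is higher, which is how such provisions are commonly structured; the article itself only gives the lower and upper tiers, so the tier labels below are placeholders rather than the Act’s own categories.

```python
# Illustrative sketch of tiered, turnover-based fines like those described above.
# The euro amounts and percentages come from the article; the tier labels and the
# "whichever is higher" rule are assumptions, not quotations from the Act.

FINE_TIERS = {
    # tier label: (fixed amount in euros, share of global annual turnover)
    "lowest_tier": (7_500_000, 0.015),
    "highest_tier": (35_000_000, 0.07),
}


def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine for a tier: fixed sum or turnover share, whichever is higher."""
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * global_turnover_eur)


if __name__ == "__main__":
    # Example: a company with 10 billion euros in global annual turnover.
    turnover = 10_000_000_000
    print(max_fine_eur("lowest_tier", turnover))   # 150,000,000.0 euros
    print(max_fine_eur("highest_tier", turnover))  # 700,000,000.0 euros
```

For a large company the percentage term dominates: at 10 billion euros in annual turnover, the top tier’s 7% cap works out to 700 million euros.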
