The Head of Instagram Agrees To Testify as Congress Probes the App’s Effects on Young People

Adam Mosseri, the head of Instagram, has agreed for the first time to testify before Congress, as bipartisan anger mounts over harms to young people from the app. From a report: Mr. Mosseri is expected to appear before a Senate panel during the week of Dec. 6 as part of a series of hearings on protecting children online, said Senator Richard Blumenthal, who will lead the hearing. Mr. Mosseri’s appearance follows hearings this year with Antigone Davis, the global head of safety for Meta, the parent company of Instagram and Facebook, and with Frances Haugen, a former employee turned whistle-blower. Ms. Haugen’s revelations about the social networking company, particularly those about internal research into Facebook’s and Instagram’s effects on some teenagers and young girls, have spurred criticism, inquiries from politicians and investigations from regulators.

In September, Ms. Davis told Congress that the company disputed the premise that Instagram was harmful for teenagers and noted that the leaked research did not have causal data. But after Ms. Haugen’s testimony last month, Mr. Blumenthal, a Connecticut Democrat, wrote a letter to Mark Zuckerberg, the chief executive of Meta, suggesting that his company had “provided false or inaccurate testimony to me regarding attempts to internally conceal its research.” Mr. Blumenthal asked that Mr. Zuckerberg or Mr. Mosseri testify in front of the consumer protection subcommittee of the Senate’s Commerce Committee to set the record straight.

Read more of this story at Slashdot.

Can an Athlete’s Blood Enhance Brainpower?

fahrbot-bot shares a report from The New York Times: What if something in the blood of an athlete could boost the brainpower of someone who doesn’t or can’t exercise? Could a protein that gets amplified when people exercise help stave off symptoms of Alzheimer’s and other memory disorders? That’s the tantalizing prospect raised by a new study in which researchers injected sedentary mice with blood from mice that ran for miles on exercise wheels, and found that the sedentary mice then did better on tests of learning and memory. The study, published Wednesday in the journal Nature, also found that the type of brain inflammation involved in Alzheimer’s and other neurological disorders was reduced in sedentary mice after they received their athletic counterparts’ blood. Scientific results with mice don’t necessarily translate to humans. Still, experts said the study adds to a growing body of research on how exercise benefits the brain.

The study involved mice that were about three months old, roughly the equivalent of 25-to-30-year-old humans. Mice are nocturnal animals that love to run, and some of them could freely use exercise wheels in their cages, logging about four to six miles on the wheels each night. The wheels were locked for other mice, which could scoot around their cages but could not get an extended cardio workout. […] After 28 days, the researchers took a third group of mice that also did not exercise and injected them with blood plasma, the liquid that surrounds blood cells, from either the runner mice or the non-runner mice. Mice receiving runner blood did better on two tests of learning and memory than those receiving blood from the non-runner mice. In one test, which measures how long a mouse will freeze in fear when it is returned to a cage where it previously received an electric foot shock, mice with runner blood froze 25 percent longer, indicating they had better memory of the stressful event […]. In the other test, mice with runner blood were twice as fast at finding a platform submerged in opaque water. The team also found that the brains of mice with runner blood produced more of several types of brain cells, including those that generate new neurons in the hippocampus, a region involved in memory and spatial learning. A genetic analysis showed that about 1,950 genes had changed in response to the infusion of runner blood, becoming either more or less activated. Most of the 250 genes with the greatest activation changes were involved in inflammation, and their changes suggested that brain inflammation was reduced.

Read more of this story at Slashdot.

Suicide Pods Now Legal In Switzerland, Providing Users With a Painless Death

Switzerland is giving the green light to so-called “suicide capsules” — 3-D printed pods that allow people to choose the place where they want to die an assisted death. Global News reports: The country’s medical review board announced the legalization of the Sarco Suicide Pods this week. They can be operated by the user from the inside. Dr. Philip Nitschke, the developer of the pods and founder of Exit International, a pro-euthanasia group, told SwissInfo.ch that the machines can be “towed anywhere for the death” and that one of the most positive features of the capsules is that they can be transported to an “idyllic outdoor setting.”

Currently, assisted suicide in Switzerland means swallowing a capsule filled with a cocktail of controlled substances that puts the person into a deep coma before they die. But Sarco pods — short for sarcophagus — allow a person to control their death inside the pod by quickly reducing internal oxygen levels. The person intending to end their life is required to answer a set of pre-recorded questions, then press a button that floods the interior with nitrogen. The oxygen level inside is quickly reduced from 21 per cent to one per cent. After death, the pod can be used as a coffin. […]

Nitschke said his method of death is painless and that the person will feel slightly disoriented or euphoric before losing consciousness. He said there are only two capsule prototypes in existence, but a third machine is being printed now, and he expects this method to become available to the Swiss public next year.

Read more of this story at Slashdot.

Facing Hostile Chinese Authorities, Apple CEO Signed $275 Billion Deal With Them

Interviews and internal Apple documents provide a behind-the-scenes look at how the company made concessions to Beijing and won key legal exemptions. CEO Tim Cook personally lobbied officials over threats that would have hobbled Apple’s devices and services. His interventions paved the way for Apple’s unparalleled success in the country. The Information: Apple’s iPhone recently became the top-selling smartphone in China, its second-biggest market after the U.S., for the first time in six years. But the company owes much of that success to CEO Tim Cook, who laid the foundation years ago by secretly signing an agreement, estimated to be worth more than $275 billion, in which Apple promised Chinese officials it would do its part to develop China’s economy and technological prowess through investments, business deals and worker training. Cook forged the five-year agreement, which hasn’t been previously reported, during the first of a series of in-person visits he made to the country in 2016 to quash a sudden burst of regulatory actions against Apple’s business, according to internal Apple documents viewed by The Information. Before the meetings, Apple executives were scrambling to salvage the company’s relationship with Chinese officials, who believed the company wasn’t contributing enough to the local economy, the documents show. Amid the government crackdown and the bad publicity that accompanied it, iPhone sales plummeted.

Read more of this story at Slashdot.

DeepMind Cracks ‘Knot’ Conjecture That Bedeviled Mathematicians For Decades

The artificial intelligence (AI) lab DeepMind has helped mathematicians get closer to proving a math conjecture that’s bedeviled them for decades, and has revealed a new conjecture that may unravel how mathematicians understand knots. Live Science reports: The two pure math conjectures are the first-ever important advances in pure mathematics (or math not directly linked to any non-math application) generated by artificial intelligence, the researchers reported Dec. 1 in the journal Nature. […] The first challenge was setting DeepMind onto a useful path. […] They focused on two fields: knot theory, which is the mathematical study of knots; and representation theory, which is a field that focuses on abstract algebraic structures, such as rings and lattices, and relates those abstract structures to linear algebraic equations, or the familiar equations with Xs, Ys, pluses and minuses that might be found in a high-school math class.

In understanding knots, mathematicians rely on something called invariants, which are algebraic, geometric or numerical quantities that remain the same for equivalent knots. Equivalence can be defined in several ways, but knots can be considered equivalent if you can distort one into another without breaking the knot. Geometric invariants are essentially measurements of a knot’s overall shape, whereas algebraic invariants describe how the knots twist in and around each other. “Up until now, there was no proven connection between those two things,” [said Alex Davies, a machine-learning specialist at DeepMind and one of the authors of the new paper], referring to geometric and algebraic invariants. But mathematicians thought there might be some kind of relationship between the two, so the researchers decided to use machine learning to find it. With the help of the AI program, they were able to identify a new geometric measurement, which they dubbed the “natural slope” of a knot. This measurement was mathematically related to a known algebraic invariant called the signature, which describes certain surfaces on knots.
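
To make that workflow concrete, here is a minimal, hypothetical sketch in Python of the kind of supervised-learning setup described above: fit a simple model that predicts a knot’s signature from candidate geometric quantities and see which feature carries the relationship. The data, the feature names, and the exact form of the slope/signature link are all invented for illustration; this is not DeepMind’s code or the actual invariants used in the paper.

```python
# Hypothetical sketch: regress an algebraic knot invariant (the signature)
# on candidate geometric invariants, in the spirit of the approach described
# above. The dataset is synthetic; real work used large tables of computed
# knot invariants.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_knots = 1000

# Hypothetical geometric features for each knot: volume, injectivity radius,
# and a "slope"-like quantity (stand-ins, not actual computed invariants).
volume = rng.uniform(2.0, 30.0, n_knots)
injectivity_radius = rng.uniform(0.1, 1.5, n_knots)
slope = rng.normal(0.0, 5.0, n_knots)

X = np.column_stack([volume, injectivity_radius, slope])

# Synthetic "signature" target: in this toy setup it depends mostly on the
# slope-like feature, mimicking the reported slope/signature relationship.
signature = np.round(0.5 * slope + rng.normal(0.0, 0.5, n_knots))

model = LinearRegression().fit(X, signature)
print("R^2 on training data:", model.score(X, signature))
print("Coefficients (volume, inj. radius, slope):", model.coef_)
# A large coefficient on the slope-like feature, with the others near zero,
# is the kind of signal that would prompt a human mathematician to look for
# an exact relationship between that quantity and the signature.
```

The point of the sketch is the workflow rather than the model: a simple regression whose coefficients can be read off is enough to surface a candidate relationship that a mathematician can then state precisely and try to prove.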

In the second case, DeepMind took a conjecture generated by mathematicians in the late 1970s and helped reveal why that conjecture works. For 40 years, mathematicians have conjectured that it’s possible to look at a specific kind of very complex, multidimensional graph and figure out a particular kind of equation to represent it. But they haven’t quite worked out how to do it. Now, DeepMind has come closer by linking specific features of the graphs to predictions about these equations, which are called Kazhdan-Lusztig (KL) polynomials, named after the mathematicians who first proposed them. “What we were able to do is train some machine-learning models that were able to predict what the polynomial was, very accurately, from the graph,” Davies said. The team also analyzed which features of the graph the model was using to make those predictions, which got them closer to a general rule about how the two map to each other. This means DeepMind has made significant progress toward solving this conjecture, known as the combinatorial invariance conjecture.
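
The “predict, then inspect” loop described here can be sketched in the same hedged way, again with entirely synthetic data and made-up feature names: train a model to predict a numeric target standing in for a KL-polynomial coefficient from graph-derived statistics, then rank feature importances to see which graph properties the model actually relies on.

```python
# Hypothetical sketch of the "train a model, then ask which inputs it uses"
# workflow described above. The features and target are synthetic stand-ins
# for graph statistics and a KL-polynomial coefficient.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_graphs = 2000

# Hypothetical per-graph features (not the actual quantities used in the paper).
features = {
    "num_vertices": rng.integers(5, 50, n_graphs).astype(float),
    "num_edges": rng.integers(10, 200, n_graphs).astype(float),
    "num_reflections": rng.integers(1, 20, n_graphs).astype(float),
    "interval_length": rng.integers(1, 10, n_graphs).astype(float),
}
X = np.column_stack(list(features.values()))

# Synthetic target: depends on only two of the features, so a good model
# should concentrate its importance there.
y = 3.0 * features["num_reflections"] + 0.5 * features["interval_length"] ** 2
y = y + rng.normal(0.0, 1.0, n_graphs)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))

# Rank features by how much the trained model relies on them; in the real
# project, this kind of attribution pointed researchers toward the graph
# properties worth turning into a precise conjecture.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

A ranking that concentrates almost all of the importance on a couple of features is the cue to look for an exact combinatorial rule involving just those quantities, which is the step the mathematicians then carry out by hand.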

Read more of this story at Slashdot.