Google’s Claims of Super-Human AI Chip Layout Back Under the Microscope

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. The Register reports: In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as the CPU and GPU cores, and memory and peripheral controllers, actually sit on the physical silicon die. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. The research got the attention of the electronic design automation community, which was already moving toward incorporating machine-learning algorithms into its software suites. Now Google’s claims about its better-than-human model have been challenged by a team at the University of California, San Diego (UCSD).
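For readers unfamiliar with what a floorplanner actually optimizes: a standard proxy objective is half-perimeter wirelength (HPWL), which both human designers and ML-driven placers try to minimize. The sketch below is purely illustrative (the block names, coordinates, and nets are invented; it is not Google's method):

```python
# Illustrative sketch only: half-perimeter wirelength (HPWL), a common
# proxy metric for floorplan quality. Lower HPWL generally means shorter
# wires between connected blocks on the die.

def hpwl(nets, positions):
    """Sum, over all nets, of the half-perimeter of the bounding box
    enclosing every block the net connects."""
    total = 0.0
    for net in nets:
        xs = [positions[block][0] for block in net]
        ys = [positions[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Hypothetical placement: block name -> (x, y) center on the die.
placement = {"cpu": (0, 0), "gpu": (4, 0), "mem": (0, 3), "io": (4, 3)}
nets = [("cpu", "mem"), ("cpu", "gpu", "io")]

print(hpwl(nets, placement))  # 3 + 7 = 10.0
```

A reinforcement-learning placer like the one described in the paper would repeatedly propose block positions and be rewarded for reducing metrics of this kind (alongside congestion and density), which is why the quality of the starting arrangement it iterates from matters so much to the final result.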

Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model’s inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to build its own complete version to verify the Googlers’ findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google’s paper. The university academics ultimately found their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.

What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip’s logic gates that the web giant’s reinforcement learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus their model deserved credit for the optimized end product. However, the UCSD team said there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model to iterate over. It’s argued these Synopsys tools may have given the model a decent enough head start that the AI system’s true capabilities should be called into question.

The lead authors of Google’s paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team’s work isn’t an accurate implementation of their method. They pointed out (PDF) that Prof Kahng’s group obtained worse results because it didn’t pre-train the model on any data at all. They also noted that Prof Kahng’s team did not train its system using the same amount of computing power as Google did, and suggested this step may not have been carried out properly, crippling the model’s performance. Mirhoseini and Goldie also said the pre-processing step using EDA applications that was not explicitly described in their Nature paper wasn’t important enough to mention. The UCSD group, however, said it didn’t pre-train its model because it didn’t have access to the Google proprietary data. The group added that its software had been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper. Separately, a fired Google AI researcher claims the internet goliath’s research paper was “done in context of a large potential Cloud deal” worth $120 million at the time.

Read more of this story at Slashdot.

Bill Gates Predicts ‘The Age of AI Has Begun’

Bill Gates calls the invention of AI “as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone,” predicting “Entire industries will reorient around it” in an essay titled “The Age of AI Has Begun.”
In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows…. The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you’ll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent to getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: “What do you say to a father with a sick child?” It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

Some predictions from Gates:

“Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you’ll be able to write a request in plain English….”

“Advances in AI will enable the creation of a personal agent… It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with.”

“I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback.”

“AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way.”

AI will “help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit. I expect that there will be a lot of innovation in this area…. AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.”

Read more of this story at Slashdot.

Some Apple Employees Fear Its $3,000 Mixed-Reality Headset Could Flop

An anonymous reader shares this report from AppleInsider:
Apple has allegedly demonstrated its mixed reality headset to its top executives recently, in an attempt to generate excitement for the upcoming platform launch. While executives are keen on the product, others within Apple are not sure it’s a home run. Eight anonymous current and former employees told the New York Times that they are skeptical about the headset, despite Apple’s apparent glossy demonstration of the technology.

Manufacturing has already begun for a June release of the $3,000 headset, insiders say in the Times’ article:

Some employees have defected from the project because of their doubts about its potential, three people with knowledge of the moves said. Others have been fired over the lack of progress with some aspects of the headset, including its use of Apple’s Siri voice assistant, one person said. Even leaders at Apple have questioned the product’s prospects. It has been developed at a time when morale has been strained by a wave of departures from the company’s design team, including Mr. Ive, who left Apple in 2019 and stopped advising the company last year….

Because the headset won’t fit over glasses, the company has plans to sell prescription lenses for the displays to people who don’t wear contacts, a person familiar with the plan said. During the device’s development, Apple has focused on making it excel for videoconferencing and spending time with others as avatars in a virtual world. The company has called the device’s signature application “copresence,” a word designed to capture the experience of sharing a real or virtual space with someone in another place. It is akin to what Mark Zuckerberg, Facebook’s founder, calls the “metaverse….”

But the road to deliver augmented reality has been littered with failures, false starts and disappointments, from Google Glass to Magic Leap and from Microsoft’s HoloLens to Meta’s Quest Pro. Apple is considered a potential savior because of its success combining new hardware and software to create revolutionary devices.
Still, the challenges are daunting.

Read more of this story at Slashdot.

IBM Installs World’s First Quantum Computer for Accelerating Healthcare Research

It’s one of America’s best hospitals — a nonprofit “academic medical center” called the Cleveland Clinic. And this week it installed an IBM-managed quantum computer to accelerate healthcare research (according to an announcement from IBM). IBM is calling it “the first quantum computer in the world to be uniquely dedicated to healthcare research.”

The clinic’s CEO said the technology “holds tremendous promise in revolutionizing healthcare and expediting progress toward new cares, cures and solutions for patients.” IBM’s CEO added that “By combining the power of quantum computing, artificial intelligence and other next-generation technologies with Cleveland Clinic’s world-renowned leadership in healthcare and life sciences, we hope to ignite a new era of accelerated discovery.”

Inside HPC points out that the IBM Quantum System One is part of a larger biomedical research program applying high-performance computing, AI, and quantum computing, with IBM and the Cleveland Clinic “collaborating closely on a robust portfolio of projects with these advanced technologies to generate and analyze massive amounts of data to enhance research.”
The Cleveland Clinic-IBM Discovery Accelerator has generated multiple projects that leverage the latest in quantum computing, AI and hybrid cloud to help expedite discoveries in biomedical research. These include:

– Development of quantum computing pipelines to screen and optimize drugs targeted to specific proteins;
– Improvement of a quantum-enhanced prediction model for cardiovascular risk following non-cardiac surgery;
– Application of artificial intelligence to search genome sequencing findings and large drug-target databases to find effective, existing drugs that could help patients with Alzheimer’s and other diseases.

The Discovery Accelerator also serves as the technology foundation for Cleveland Clinic’s Global Center for Pathogen & Human Health Research, part of the Cleveland Innovation District. The center, supported by a $500 million investment from the State of Ohio, Jobs Ohio and Cleveland Clinic, brings together a team focused on studying, preparing and protecting against emerging pathogens and virus-related diseases. Through the Discovery Accelerator, researchers are leveraging advanced computational technology to expedite critical research into treatments and vaccines.

Read more of this story at Slashdot.

Google Security Researchers Accuse CentOS of Failing to Backport Kernel Fixes

An anonymous reader quotes Neowin:
Google Project Zero is a security team responsible for discovering security flaws in Google’s own products as well as software developed by other vendors. Following discovery, the issues are privately reported to vendors and they are given 90 days to fix the reported problems before they are disclosed publicly…. Now, the security team has reported several flaws in CentOS’ kernel.

As detailed in the technical document here, Google Project Zero’s security researcher Jann Horn learned that kernel fixes made to stable trees are not backported to many enterprise versions of Linux. To validate this hypothesis, Horn compared the CentOS Stream 9 kernel to the upstream linux-5.15.y stable tree…. As expected, it turned out that several kernel fixes had not been deployed in older, but supported versions of CentOS Stream/RHEL. Horn further noted that for this case, Project Zero is giving a 90-day deadline to release a fix, but in the future, it may allot even stricter deadlines for missing backports….
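The idea behind such a comparison can be sketched simply: gather the fix commits on the upstream stable branch (e.g. via `git log --format=%s <range>`) and check which ones never landed in the downstream distro kernel. The commit subjects below are invented examples, and real backports are often reworded or carry upstream-commit references, so subject matching like this is only a rough first pass:

```python
# Conceptual sketch of a missing-backport check: compare fix subjects
# on an upstream stable branch against a downstream distro kernel.
# Subjects here are hypothetical; a real check would pull them from
# `git log --format=%s` on each tree.

def missing_backports(upstream_subjects, downstream_subjects):
    """Return upstream fixes whose subjects never appear downstream."""
    downstream = set(downstream_subjects)
    return [s for s in upstream_subjects if s not in downstream]

upstream = [
    "io_uring: fix use-after-free in request handling",
    "netfilter: fix refcount leak on table flush",
    "mm: fix race in page table teardown",
]
downstream = [
    "netfilter: fix refcount leak on table flush",
]

print(missing_backports(upstream, downstream))
```

The delta this surfaces is exactly what Horn warns about: each upstream fix absent downstream is a publicly documented bug an attacker can look for in the enterprise kernel.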

Red Hat accepted all three bugs reported by Horn and assigned them CVE numbers. However, the company failed to fix these issues in the allotted 90-day timeline, and as such, these vulnerabilities are being made public by Google Project Zero.
Horn is urging better patch scheduling so “an attacker who wants to quickly find a nice memory corruption bug in CentOS/RHEL can’t just find such bugs in the delta between upstream stable and your kernel.”

Read more of this story at Slashdot.