Google’s Claims of Super-Human AI Chip Layout Back Under the Microscope

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. The Register reports: In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as the CPU and GPU cores, and memory and peripheral controllers, actually sit on the physical silicon die. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. The research got the attention of the electronic design automation community, which was already moving toward incorporating machine-learning algorithms into their software suites. Now Google's claims about its better-than-humans model have been challenged by a team at the University of California, San Diego (UCSD).

Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model's inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to make their own complete version to verify the Googlers' findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google's paper. The university academics ultimately found their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.

What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip’s logic gates that the web giant’s reinforcement learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus its model deserved credit for the optimized end product. However, the UCSD team said there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model to iterate over. It’s argued these Synopsys tools may have given the model a decent enough head start that the AI system’s true capabilities should be called into question.

The lead authors of Google's paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team's work isn't an accurate implementation of their method. They pointed out (PDF) that Prof Kahng's group obtained worse results because it didn't pre-train the model on any data at all. They added that Prof Kahng's team did not train its system with the same amount of computing power Google used, and suggested this step may not have been carried out properly, crippling the model's performance. Mirhoseini and Goldie also said the pre-processing step using EDA applications that was not explicitly described in their Nature paper wasn't important enough to mention. The UCSD group countered that it didn't pre-train its model because it didn't have access to Google's proprietary data. It noted, however, that its software had been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper. Separately, a fired Google AI researcher claims the internet goliath's research paper was "done in context of a large potential Cloud deal" worth $120 million at the time.
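For readers outside the EDA world, a toy example may help clarify what a floorplanner actually optimizes. The sketch below computes half-perimeter wirelength (HPWL), a standard proxy metric in chip placement; the block names, coordinates, and nets are invented for illustration and have nothing to do with Google's actual objective or model.

```python
# Toy illustration of a standard floorplanning objective: half-perimeter
# wirelength (HPWL). Real tools (and RL-based placers) optimize richer
# objectives, such as congestion, density, and timing, but HPWL conveys
# the basic idea: keep connected blocks close together.

def hpwl(net, placement):
    """HPWL of one net: width + height of the bounding box
    enclosing all blocks the net connects."""
    xs = [placement[block][0] for block in net]
    ys = [placement[block][1] for block in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wirelength(nets, placement):
    """Sum of HPWL over every net in the design."""
    return sum(hpwl(net, placement) for net in nets)

# Hypothetical placement: block name -> (x, y) position on the die.
placement = {"cpu": (0, 0), "gpu": (4, 0), "mem": (0, 3), "io": (4, 3)}
nets = [("cpu", "mem"), ("cpu", "gpu", "io")]

print(total_wirelength(nets, placement))  # 3 + (4 + 3) = 10
```

A floorplanner, whether human or machine, searches over placements to drive metrics like this down while satisfying layout constraints.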

Read more of this story at Slashdot.

Bill Gates Predicts ‘The Age of AI Has Begun’

Bill Gates calls the invention of AI "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," predicting "Entire industries will reorient around it" in an essay titled "The Age of AI Has Begun."
In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows…. The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you’ll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent to getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: “What do you say to a father with a sick child?” It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

Some predictions from Gates:

“Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you’ll be able to write a request in plain English….”

“Advances in AI will enable the creation of a personal agent… It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with.”

“I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback.”

“AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way.”

AI will “help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit. I expect that there will be a lot of innovation in this area…. AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.”

Read more of this story at Slashdot.

Some Apple Employees Fear Its $3,000 Mixed-Reality Headset Could Flop

An anonymous reader shares this report from AppleInsider:
Apple has allegedly demonstrated its mixed reality headset to its top executives recently, in an attempt to generate excitement for the upcoming platform launch. While executives are keen on the product, others within Apple are not sure it's a home run. Eight anonymous current and former employees told the New York Times that they are skeptical about the headset, despite Apple's apparent glossy demonstration of the technology.

Manufacturing has already begun for a June release of the $3,000 headset, insiders say in the Times’ article:

Some employees have defected from the project because of their doubts about its potential, three people with knowledge of the moves said. Others have been fired over the lack of progress with some aspects of the headset, including its use of Apple's Siri voice assistant, one person said. Even leaders at Apple have questioned the product's prospects. It has been developed at a time when morale has been strained by a wave of departures from the company's design team, including Mr. Ive, who left Apple in 2019 and stopped advising the company last year....

Because the headset won’t fit over glasses, the company has plans to sell prescription lenses for the displays to people who don’t wear contacts, a person familiar with the plan said. During the device’s development, Apple has focused on making it excel for videoconferencing and spending time with others as avatars in a virtual world. The company has called the device’s signature application “copresence,” a word designed to capture the experience of sharing a real or virtual space with someone in another place. It is akin to what Mark Zuckerberg, Facebook’s founder, calls the “metaverse….”

But the road to deliver augmented reality has been littered with failures, false starts and disappointments, from Google Glass to Magic Leap and from Microsoft’s HoloLens to Meta’s Quest Pro. Apple is considered a potential savior because of its success combining new hardware and software to create revolutionary devices.
Still, the challenges are daunting.

Read more of this story at Slashdot.

IBM Installs World’s First Quantum Computer for Accelerating Healthcare Research

It’s one of America’s best hospitals — a nonprofit “academic medical center” called the Cleveland Clinic. And this week it installed an IBM-managed quantum computer to accelerate healthcare research (according to an announcement from IBM). IBM is calling it “the first quantum computer in the world to be uniquely dedicated to healthcare research.”

The clinic’s CEO said the technology “holds tremendous promise in revolutionizing healthcare and expediting progress toward new cares, cures and solutions for patients.” IBM’s CEO added that “By combining the power of quantum computing, artificial intelligence and other next-generation technologies with Cleveland Clinic’s world-renowned leadership in healthcare and life sciences, we hope to ignite a new era of accelerated discovery.”

Inside HPC points out that "IBM Quantum System One" is part of a larger biomedical research program applying high-performance computing, AI, and quantum computing, with IBM and the Cleveland Clinic "collaborating closely on a robust portfolio of projects with these advanced technologies to generate and analyze massive amounts of data to enhance research."
The Cleveland Clinic-IBM Discovery Accelerator has generated multiple projects that leverage the latest in quantum computing, AI and hybrid cloud to help expedite discoveries in biomedical research. These include:

– Development of quantum computing pipelines to screen and optimize drugs targeted to specific proteins;
– Improvement of a quantum-enhanced prediction model for cardiovascular risk following non-cardiac surgery;
– Application of artificial intelligence to search genome sequencing findings and large drug-target databases to find effective, existing drugs that could help patients with Alzheimer’s and other diseases.

The Discovery Accelerator also serves as the technology foundation for Cleveland Clinic’s Global Center for Pathogen & Human Health Research, part of the Cleveland Innovation District. The center, supported by a $500 million investment from the State of Ohio, Jobs Ohio and Cleveland Clinic, brings together a team focused on studying, preparing and protecting against emerging pathogens and virus-related diseases. Through the Discovery Accelerator, researchers are leveraging advanced computational technology to expedite critical research into treatments and vaccines.

Read more of this story at Slashdot.

Google Security Researchers Accuse CentOS of Failing to Backport Kernel Fixes

An anonymous reader quotes Neowin:
Google Project Zero is a security team responsible for discovering security flaws in Google’s own products as well as software developed by other vendors. Following discovery, the issues are privately reported to vendors and they are given 90 days to fix the reported problems before they are disclosed publicly…. Now, the security team has reported several flaws in CentOS’ kernel.

As detailed in the technical document here, Google Project Zero's security researcher Jann Horn learned that kernel fixes made to stable trees are not backported to many enterprise versions of Linux. To validate this hypothesis, Horn compared the CentOS Stream 9 kernel to the upstream linux-5.15.y stable tree.... As expected, it turned out that several kernel fixes have not been deployed in older, but supported, versions of CentOS Stream/RHEL. Horn further noted that for this case, Project Zero is giving a 90-day deadline to release a fix, but in the future, it may allot even stricter deadlines for missing backports....

Red Hat accepted all three bugs reported by Horn and assigned them CVE numbers. However, the company failed to fix these issues in the allotted 90-day timeline, and as such, these vulnerabilities are being made public by Google Project Zero.
Horn is urging better patch scheduling so “an attacker who wants to quickly find a nice memory corruption bug in CentOS/RHEL can’t just find such bugs in the delta between upstream stable and your kernel.”
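The comparison Horn performed, diffing the fix commits in the upstream stable tree against a vendor kernel, can be sketched in miniature. The commit subjects below are invented placeholders, not real fixes:

```python
# Minimal sketch of a missing-backport check: take the set of fix
# commits present in the upstream stable tree and subtract those that
# landed in the vendor (e.g. CentOS Stream) kernel. Whatever remains is
# a candidate list of unpatched, publicly known bugs in the vendor tree.
# Commit subjects here are invented placeholders.

upstream_stable_fixes = {
    "netfilter: fix use-after-free in table lookup",
    "mm: fix refcount leak on migration failure",
    "io_uring: fix race in request cancellation",
}

vendor_kernel_fixes = {
    "netfilter: fix use-after-free in table lookup",
}

missing_backports = sorted(upstream_stable_fixes - vendor_kernel_fixes)
for subject in missing_backports:
    print(subject)
```

In practice the commit lists would come from `git log --oneline` over a revision range on each tree, and matching on subject lines is only a heuristic, since backported commits are often reworded or split.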

Read more of this story at Slashdot.

The New US-China Proxy War Over Undersea Internet Cables

400 undersea cables carry 95% of the world’s international internet traffic, reports Reuters (citing figures from Washington-based telecommunications research firm TeleGeography).

But now there’s “a growing proxy war between the United States and China over technologies that could determine who achieves economic and military dominance for decades to come.”

In February, American subsea cable company SubCom LLC began laying a $600-million cable to transport data from Asia to Europe, via Africa and the Middle East, at super-fast speeds over 12,000 miles of fiber running along the seafloor. That cable is known as South East Asia-Middle East-Western Europe 6, or SeaMeWe-6 for short. It will connect a dozen countries as it snakes its way from Singapore to France, crossing three seas and the Indian Ocean on the way. It is slated to be finished in 2025.

It was a project that slipped through China’s fingers….

The Singapore-to-France cable would have been HMN Tech’s biggest such project to date, cementing it as the world’s fastest-rising subsea cable builder, and extending the global reach of the three Chinese telecom firms that had intended to invest in it. But the U.S. government, concerned about the potential for Chinese spying on these sensitive communications cables, ran a successful campaign to flip the contract to SubCom through incentives and pressure on consortium members…. It’s one of at least six private undersea cable deals in the Asia-Pacific region over the past four years where the U.S. government either intervened to keep HMN Tech from winning that business, or forced the rerouting or abandonment of cables that would have directly linked U.S. and Chinese territories….

Justin Sherman, a fellow at the Cyber Statecraft Initiative of the Atlantic Council, a Washington-based think tank, told Reuters that undersea cables were “a surveillance gold mine” for the world’s intelligence agencies. “When we talk about U.S.-China tech competition, when we talk about espionage and the capture of data, submarine cables are involved in every aspect of those rising geopolitical tensions,” Sherman said.

Read more of this story at Slashdot.

Huawei Claims To Have Built Its Own 14nm Chip Design Suite

Huawei has reportedly completed work on electronic design automation (EDA) tools for laying out and making chips down to 14nm process nodes. The Register reports: Chinese media said the platform is one of 78 being developed by the telecoms equipment giant to replace American and European chip design toolkits that have become subject to export controls by the US and others. Huawei’s EDA platform was reportedly revealed by rotating Chairman Xu Zhijun during a meeting in February, and later confirmed by media in China. […] Huawei’s focus on EDA software for 14nm and larger chips reflects the current state of China’s semiconductor industry. State-backed foundry operator SMIC currently possesses the ability to produce 14nm chips at scale, although there have been some reports the company has had success developing a 7nm process node.

Today, the EDA market is largely controlled by three companies: California-based Synopsys and Cadence, as well as Germany’s Siemens. According to the industry watchers at TrendForce, these three companies account for roughly 75 percent of the EDA market. And this poses a problem for Chinese chipmakers and foundries, which have steadily found themselves cut off from these tools. Synopsys and Cadence’s EDA tech is already subject to several of these export controls, which were stiffened by the US Commerce Department last summer to include state-of-the-art gate-all-around (GAA) transistors. This January, the White House also reportedly stopped issuing export licenses to companies supplying the likes of Huawei.

This is particularly troublesome for Huawei, foundry operator SMIC, and memory vendor YMTC to name a few on the US Entity List, a roster of companies Uncle Sam would prefer you not to do business with. It leaves them unable to access the latest technologies, at the very least. So the development of a homegrown EDA platform for 14nm chips serves as insurance in case broader access to Western production platforms is cut off entirely.

Read more of this story at Slashdot.

Intel Co-Founder/Creator of ‘Moore’s Law’ Gordon Moore Dies at Age 94

Intel announced Friday that its co-founder Gordon Moore has died at the age of 94:

Moore and his longtime colleague Robert Noyce founded Intel in July 1968. Moore initially served as executive vice president until 1975, when he became president. In 1979, Moore was named chairman of the board and chief executive officer, posts he held until 1987, when he gave up the CEO position and continued as chairman. In 1997, Moore became chairman emeritus, stepping down in 2006.

During his lifetime, Moore also dedicated his focus and energy to philanthropy, particularly environmental conservation, science and patient care improvements. Along with his wife of 72 years, he established the Gordon and Betty Moore Foundation, which has donated more than $5.1 billion to charitable causes since its founding in 2000….

“Though he never aspired to be a household name, Gordon’s vision and his life’s work enabled the phenomenal innovation and technological developments that shape our everyday lives,” said foundation president Harvey Fineberg. “Yet those historic achievements are only part of his legacy. His and Betty’s generosity as philanthropists will shape the world for generations to come.”

Pat Gelsinger, Intel CEO, said, “Gordon Moore defined the technology industry through his insight and vision. He was instrumental in revealing the power of transistors, and inspired technologists and entrepreneurs across the decades. We at Intel remain inspired by Moore’s Law and intend to pursue it until the periodic table is exhausted….”
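For context, Moore's Law in its common modern form holds that transistor counts on a chip double roughly every two years. The idealized curve is simple to sketch; the 1971 starting point (Intel's roughly 2,300-transistor 4004) is real, but the projected figures are the pure exponential, not actual product counts.

```python
# Idealized Moore's Law: transistor count doubles every two years.
# Baseline: Intel's 4004 microprocessor (1971), ~2,300 transistors.

def moores_law(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under an exact two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Twenty years is ten doublings, i.e. a factor of 1,024.
for year in (1971, 1991, 2011):
    print(year, f"{moores_law(year):,.0f}")
```

Real chips have tracked this curve only approximately, but the compounding it describes is what Gelsinger's remark about "pursuing it until the periodic table is exhausted" alludes to.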

Prior to establishing Intel, Moore and Noyce participated in the founding of Fairchild Semiconductor, where they played central roles in the first commercial production of diffused silicon transistors and later the world’s first commercially viable integrated circuits. The two had previously worked together under William Shockley, the co-inventor of the transistor and founder of Shockley Semiconductor, which was the first semiconductor company established in what would become Silicon Valley.

Read more of this story at Slashdot.