Google’s Claims of Super-Human AI Chip Layout Back Under the Microscope
Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse-engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model’s inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to build their own complete version to verify the Googlers’ findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google’s paper. The university academics ultimately found their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.
What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip’s logic gates that the web giant’s reinforcement-learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and to finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus their model deserved credit for the optimized end product. However, the UCSD team said the Nature paper made no mention of EDA tools being used beforehand to prepare a layout for the model to iterate over. They argued these Synopsys tools may have given the model enough of a head start that the AI system’s true capabilities should be called into question.
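To make the point of contention concrete, here is a minimal toy sketch of the two-stage flow at issue: a seed placement (standing in for the EDA tool’s output) is handed to an iterative optimizer (standing in for Google’s reinforcement-learning agent). Everything here is illustrative and assumed, not Google’s actual pipeline; the cost function is a toy half-perimeter wirelength, and simple hill climbing stands in for the RL policy.

```python
import random

GRID = 16  # toy canvas: a 16x16 placement grid (illustrative assumption)

def eda_style_seed(num_blocks, rng):
    """Stand-in for an EDA-generated starting placement (random here)."""
    return [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(num_blocks)]

def wirelength(placement, nets):
    """Toy cost: half-perimeter wirelength of each net's bounding box."""
    total = 0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def refine(placement, nets, steps, rng):
    """Greedy refinement loop standing in for the RL agent's moves."""
    best = list(placement)
    best_cost = wirelength(best, nets)
    for _ in range(steps):
        trial = list(best)
        b = rng.randrange(len(trial))
        trial[b] = (rng.randrange(GRID), rng.randrange(GRID))  # move one block
        cost = wirelength(trial, nets)
        if cost < best_cost:  # keep only improving moves
            best, best_cost = trial, cost
    return best, best_cost

rng = random.Random(0)
nets = [(0, 1, 2), (2, 3), (1, 3, 4)]  # hypothetical netlist over five blocks
seed = eda_style_seed(5, rng)
final, cost = refine(seed, nets, steps=2000, rng=rng)
print(f"seeded cost: {wirelength(seed, nets)}, refined cost: {cost}")
```

In a sketch like this, the quality of the final layout depends both on the seed and on the refinement loop, which is exactly why the UCSD team says the undisclosed Synopsys seeding step matters when crediting the model.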
The lead authors of Google’s paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team’s work isn’t an accurate implementation of their method. They pointed out (PDF) that Prof Kahng’s group obtained worse results because it didn’t pre-train the model on any data at all. They also noted that Prof Kahng’s team did not train its system with the same amount of computing power Google used, and suggested this step may not have been carried out properly, crippling the model’s performance. Mirhoseini and Goldie also said the pre-processing step using EDA applications, which was not explicitly described in their Nature paper, wasn’t important enough to mention. The UCSD group countered that it didn’t pre-train its model because it didn’t have access to Google’s proprietary data, and said its software had been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper. Separately, a fired Google AI researcher claims the internet goliath’s research paper was “done in context of a large potential Cloud deal” worth $120 million at the time.