How Google’s ‘Don’t Be Evil’ Motto Has Evolved For the AI Age

In a special report for CBS News’ 60 Minutes, Google CEO Sundar Pichai shares his concerns about artificial intelligence and why the company is choosing not to release advanced models of its AI chatbot. From the report: When Google filed for its initial public offering in 2004, its founders wrote that the company’s guiding principle, “Don’t be evil,” was meant to help ensure it did good things for the world, even if it had to forgo some short-term gains. The phrase remains in Google’s code of conduct. Pichai told 60 Minutes he is being responsible by not releasing advanced models of Bard, in part, so society can get acclimated to the technology and the company can develop further safety layers.

One of the things Pichai told 60 Minutes that keeps him up at night is Google’s AI technology being deployed in harmful ways. Google’s chatbot, Bard, has built-in safety filters to help combat the threat of malevolent users. Pichai said the company will need to constantly update the system’s algorithms to combat disinformation campaigns and detect deepfakes, computer-generated images that appear to be real. As Pichai noted in his 60 Minutes interview, consumer AI technology is in its infancy. He believes now is the right time for governments to get involved.

“There has to be regulation. You’re going to need laws … there have to be consequences for creating deep fake videos which cause harm to society,” Pichai said. “Anybody who has worked with AI for a while … realize[s] this is something so different and so deep that we would need societal regulations to think about how to adapt.” That adaptation is already happening around us, driven by technology that Pichai believes “will be more capable than anything we’ve ever seen before.” Soon it will be up to society to decide how it’s used and whether to abide by Alphabet’s code of conduct and “Do the right thing.”

Read more of this story at Slashdot.

Google’s Claims of Super-Human AI Chip Layout Back Under the Microscope

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. The Register reports: In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as the CPU and GPU cores, and memory and peripheral controllers, actually sit on the physical silicon die. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. The research got the attention of the electronic design automation community, which was already moving toward incorporating machine-learning algorithms into its software suites. Now Google’s claims about its better-than-humans model have been challenged by a team at the University of California, San Diego (UCSD).
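To give a sense of the optimization problem at stake: floorplanning assigns circuit blocks to positions on the die so that connected blocks end up close together, minimizing total wirelength. The toy sketch below illustrates only that objective with a brute-force search over a hypothetical three-block netlist; it is not Google’s reinforcement-learning method or any real EDA tool, and the block names are invented for illustration.

```python
# Toy illustration of the floorplanning objective: place blocks on a grid
# to minimize total Manhattan wirelength between connected blocks.
# Brute force is only feasible at toy sizes; real floorplanners (including
# Google's RL agent) must search the space far more cleverly.
import itertools

def wirelength(placement, nets):
    # Sum of Manhattan distances between each pair of connected blocks.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in nets)

def best_placement(blocks, nets, grid):
    # Try every assignment of blocks to distinct grid cells.
    best_wl, best_p = None, None
    for cells in itertools.permutations(grid, len(blocks)):
        p = dict(zip(blocks, cells))
        wl = wirelength(p, nets)
        if best_wl is None or wl < best_wl:
            best_wl, best_p = wl, p
    return best_wl, best_p

blocks = ["cpu", "gpu", "mem"]
nets = [("cpu", "mem"), ("gpu", "mem")]   # both cores talk to memory
grid = [(x, y) for x in range(2) for y in range(2)]

wl, placement = best_placement(blocks, nets, grid)
print(wl)  # -> 2: memory lands adjacent to both cores
```

Even this tiny instance hints at why the search is hard: the number of placements grows factorially with the block count, which is what motivates learned or heuristic approaches.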

Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model’s inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to make its own complete version to verify the Googlers’ findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google’s paper. The university academics ultimately found their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.

What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip’s logic gates that the web giant’s reinforcement learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus its model deserved credit for the optimized end product. However, the UCSD team said there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model to iterate over. It’s argued these Synopsys tools may have given the model a decent enough head start that the AI system’s true capabilities should be called into question.

The lead authors of Google’s paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team’s work isn’t an accurate implementation of their method. They pointed out (PDF) that Prof Kahng’s group obtained worse results because it didn’t pre-train its model on any data at all. Prof Kahng’s team also did not train its system with the same amount of computing power Google used, and the Google authors suggested this step may not have been carried out properly, crippling the model’s performance. Mirhoseini and Goldie also said the pre-processing step using EDA applications that was not explicitly described in their Nature paper wasn’t important enough to mention. The UCSD group countered that it didn’t pre-train its model because it didn’t have access to Google’s proprietary data, but said its software had been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper. Separately, a fired Google AI researcher claims the internet goliath’s research paper was “done in context of a large potential Cloud deal” worth $120 million at the time.

Read more of this story at Slashdot.

Google Security Researchers Accuse CentOS of Failing to Backport Kernel Fixes

An anonymous reader quotes Neowin:
Google Project Zero is a security team responsible for discovering security flaws in Google’s own products as well as software developed by other vendors. Following discovery, the issues are privately reported to vendors, which are given 90 days to fix the reported problems before they are disclosed publicly…. Now, the security team has reported several flaws in CentOS’ kernel.

As detailed in the technical document here, Google Project Zero’s security researcher Jann Horn learned that kernel fixes made to stable trees are not backported to many enterprise versions of Linux. To validate this hypothesis, Horn compared the CentOS Stream 9 kernel to the stable linux-5.15.y tree…. As expected, it turned out that several kernel fixes have not been deployed in older, but still supported, versions of CentOS Stream/RHEL. Horn further noted that in this case, Project Zero is giving a 90-day deadline to release a fix, but in the future, it may allot even stricter deadlines for missing backports….
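Conceptually, the comparison Horn performed amounts to diffing the set of fix commits in the upstream stable tree against those present in the distribution kernel. The sketch below shows that idea with hypothetical commit subject lines; it is not Project Zero’s actual tooling, and real comparisons must also account for commits that are backported under different subjects or hashes.

```python
# Sketch: find stable-tree fixes that never landed in a distro kernel,
# by comparing commit subject lines (hypothetical data, not Horn's tooling).

def missing_backports(stable_subjects, distro_subjects):
    """Return stable fixes whose subjects don't appear in the distro tree."""
    present = set(distro_subjects)
    return [s for s in stable_subjects if s not in present]

# Hypothetical subjects standing in for `git log --format=%s` output.
stable = [
    "io_uring: fix use-after-free in request handling",
    "netfilter: validate table hook count",
    "mm: avoid out-of-bounds read in page walk",
]
distro = [
    "netfilter: validate table hook count",
]

print(missing_backports(stable, distro))
# -> the io_uring and mm fixes are absent from the distro kernel
```

In practice one would feed this from `git log` over the two trees; subject-line matching is only a first pass, since backports are often reworded or split.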

Red Hat accepted all three bugs reported by Horn and assigned them CVE numbers. However, the company failed to fix these issues in the allotted 90-day timeline, and as such, these vulnerabilities are being made public by Google Project Zero.
Horn is urging better patch scheduling so “an attacker who wants to quickly find a nice memory corruption bug in CentOS/RHEL can’t just find such bugs in the delta between upstream stable and your kernel.”

Read more of this story at Slashdot.

Fitbit Is Removing Many Community-Focused Features

Google-owned Fitbit is removing several community-focused features on March 27, including Challenges and open groups. Christine Persaud writes via XDA Developers: For me, challenges were one of Fitbit’s main strengths. You could strap a fitness tracker or smartwatch to your wrist, set up an account, and chances are at least a handful of your contacts were also Fitbit users. Then, you could add them as friends to compete and compare your progress. This seems like an insignificant “nice to have” feature, but the motivation it provides is precisely the aim of wearing a fitness tracker in the first place. And without open groups, you wouldn’t have the opportunity to get to know like-minded users from around the world.

This decision eliminates one of the platform’s best features: a sense of community. Reportedly, more than 31 million people use Fitbit at least once a week. That’s a staggering number and a group of customers ripe for creating and maintaining an active community. At a time when the market is flooded with competing fitness tracker and smartwatch brands, it has become increasingly difficult to stand out. According to Statista, Fitbit has been leading the wearables space since 2014, accounting for almost half the worldwide market share at 45%. The company’s solid grasp on the market (though it now faces stiff competition from the likes of Apple, Garmin, and others) is partly because of the unique Challenges and groups. While other companies, like Apple, have a version of Challenges, they’re not as robust as what Fitbit supports. “Nonetheless, for anyone new to the market looking for a fitness tracker or smartwatch that can do it all and connect them to a wealth of information and a community of people, this news makes Fitbit a less appealing platform to consider,” adds Persaud. “All we can do is hope for bigger and better things to come with Google integration in the future.”

Read more of this story at Slashdot.

Google Gives Apple Cut of Chrome iOS Search Revenue

According to The Register, Google has been paying Apple a portion of search revenue generated by people using Google Chrome on iOS. From the report: This is one of the aspects of the relationship between the two tech goliaths that currently concerns the UK’s Competition and Markets Authority (CMA). Though everyone knows Google pays Apple, Samsung, and other manufacturers billions of dollars to make its web search engine the default on devices, it has not been reported until now that the CMA has been looking into Chrome on iOS and its role in a search revenue sharing deal Google has with Apple. The British competition watchdog is worried that Google’s payments to Apple discourage the iPhone maker from competing with Google. Substantial payments for doing nothing incentivize more of the same, it’s argued. This perhaps explains why Apple, though hugely profitable, has not launched a rival search engine or invested in the development of its Safari browser to the point that it could become a credible challenger to Chrome.

Having Google pay Apple “a significant share of revenue from Google Search traffic” passing through its own Chrome browser on iOS is difficult to explain. Apple does not provide any obvious value to people seeking to use Google Search within Google Chrome. One attempt to explain the arrangement can be found in an antitrust lawsuit filed on December 27, 2021, and subsequently amended [PDF] on March 29, 2022. The complaint, filed by the Alioto Law Firm in San Francisco, claims Apple has been paid for the profits it would have made if it had competed with Google, without the cost and challenge of doing so. “Because more than half of Google’s search business was conducted through Apple devices, Apple was a major potential threat to Google, and that threat was designated by Google as ‘Code Red,'” the complaint contends. “Google paid billions of dollars to Apple and agreed to share its profits with Apple to eliminate the threat and fear of Apple as a competitor.”

These alleged revenue sharing arrangements — which are known in detail only to a limited number of people and have yet to be fully disclosed — have been noted by the UK CMA as well as the US Justice Department, which, along with eleven US states, filed an antitrust complaint against Google on October 20, 2020. Reached by phone, attorney Joseph M. Alioto, who filed the private antitrust lawsuit, told The Register it would not surprise him to learn that Google has been paying Apple for search revenue derived from Chrome. He said Google’s deal with Apple, which began at $1 billion per year, reached as high as $15 billion annually in 2021. “The division of the market is per se illegal under the antitrust laws,” said Alioto. Apple and Google are currently trying to have the case dismissed, citing lack of evidence of a horizontal agreement between the two companies, and other supposed deficiencies.

Read more of this story at Slashdot.

Think Twice Before Using Google To Download Software, Researchers Warn

Searching Google for downloads of popular software has always come with risks, but over the past few months, it has been downright dangerous, according to researchers and a pseudorandom collection of queries. Ars Technica reports: “Threat researchers are used to seeing a moderate flow of malvertising via Google Ads,” volunteers at Spamhaus wrote on Thursday. “However, over the past few days, researchers have witnessed a massive spike affecting numerous famous brands, with multiple malware being utilized. This is not ‘the norm.’”

The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader. In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros. Over the past month, Google Ads has become the go-to place for criminals to spread their malicious wares that are disguised as legitimate downloads by impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.

On the same day that Spamhaus published its report, researchers from security firm SentinelOne documented an advanced Google malvertising campaign pushing multiple malicious loaders implemented in .NET. SentinelOne has dubbed these loaders MalVirt. At the moment, the MalVirt loaders are being used to distribute malware most commonly known as XLoader, available for both Windows and macOS. XLoader is a successor to malware also known as Formbook. Threat actors use XLoader to steal contacts’ data and other sensitive information from infected devices. The MalVirt loaders use obfuscated virtualization to evade endpoint protection and analysis. To disguise real C2 traffic and evade network detections, MalVirt beacons to decoy command and control servers hosted at providers including Azure, Tucows, Choopa, and Namecheap. “Until Google devises new defenses, the decoy domains and other obfuscation techniques remain an effective way to conceal the true control servers used in the rampant MalVirt and other malvertising campaigns,” concludes Ars. “It’s clear at the moment that malvertisers have gained the upper hand over Google’s considerable might.”

Read more of this story at Slashdot.