Binance Concealed Ties To China For Years, Even After 2017 Crypto Crackdown, Report Finds

Binance CEO Changpeng “CZ” Zhao and other senior executives concealed the crypto exchange’s ties to China for years, according to documents obtained by the Financial Times. CoinTelegraph reports: In a March 29 report, the FT claims that Binance had substantial ties to China for several years, contrary to the company’s claims that it left the country after a 2017 ban on crypto. Those ties included an office still in use by the end of 2019 and a Chinese bank used to pay employees. “We no longer publish our office addresses … people in China can directly say that our office is not in China,” Zhao reportedly said in a company message group in November 2017. Employees were told in 2018 that wages would be paid through a Shanghai-based bank. A year later, personnel on payroll in China were required to attend tax sessions at an office based in the country, according to the FT. In the messages, Binance employees also discussed a media report that claimed the company would open an office in Beijing in 2019. “Reminder: publicly, we have offices in Malta, Singapore, and Uganda. […] Please do not confirm any offices anywhere else, including China.”

The report backs up accusations made in a lawsuit filed on March 27 by the United States Commodity Futures Trading Commission (CFTC) against the exchange, claiming that Binance obscured the location of its executive offices, as well as the “identities and locations of the entities operating the trading platform.” According to the lawsuit, Zhao stated in an internal Binance memo that the policy was intended to “keep countries clean [of violations of law]” by “not landing .com anywhere. This is the main reason .com does not land anywhere.”

Read more of this story at Slashdot.

Google’s Claims of Super-Human AI Chip Layout Back Under the Microscope

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. The Register reports: In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as the CPU and GPU cores, and memory and peripheral controllers, actually sit on the physical silicon die. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. The research got the attention of the electronic design automation community, which was already moving toward incorporating machine-learning algorithms into its software suites. Now Google’s claims about its better-than-human model have been challenged by a team at the University of California, San Diego (UCSD).
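To make the floorplanning task concrete: a floorplan assigns each circuit block a position on the die, and placement tools commonly score candidate layouts with metrics such as half-perimeter wirelength (HPWL), which a good floorplan minimizes. The following is a toy illustration only, with made-up block names, coordinates, and connectivity; it is not Google’s method or any production tool:

```python
# Toy half-perimeter wirelength (HPWL) scorer for a floorplan.
# A placement maps each block to an (x, y) grid position; each net is a
# group of connected blocks. Lower total wirelength is generally better.

def wirelength(placement, nets):
    total = 0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        # Half-perimeter of the bounding box enclosing the net's blocks.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Hypothetical blocks and connectivity, purely for demonstration.
placement = {"cpu": (0, 0), "gpu": (4, 0), "mem": (0, 3), "io": (4, 3)}
nets = [("cpu", "mem"), ("cpu", "gpu"), ("gpu", "io")]
print(wirelength(placement, nets))  # 3 + 4 + 3 = 10
```

Real tools juggle many more objectives (congestion, timing, density), but a scalar score like this is what an optimizer, learned or otherwise, iterates against.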

Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model’s inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to build its own complete version to verify the Googlers’ findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google’s paper. The university academics ultimately found their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.

What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip’s logic gates that the web giant’s reinforcement learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus its model deserved credit for the optimized end product. However, the UCSD team said there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model to iterate over. It’s argued these Synopsys tools may have given the model a decent enough head start that the AI system’s true capabilities should be called into question.
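The dispute hinges on how much the starting layout matters to the final result. As a hedged sketch (invented blocks and nets, and a deliberately simple greedy optimizer rather than either team’s actual code), a refiner that improves a seed placement by swapping blocks shows why the seed’s quality is worth scrutinizing:

```python
# Illustrative only: greedy local search that refines a seed floorplan by
# swapping block positions whenever the swap lowers total wirelength.
import itertools

def wirelength(placement, nets):
    total = 0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def refine(seed, nets):
    place = dict(seed)  # work on a copy of the seed placement
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(list(place), 2):
            before = wirelength(place, nets)
            place[a], place[b] = place[b], place[a]  # try swapping two blocks
            if wirelength(place, nets) < before:
                improved = True                       # keep the swap
            else:
                place[a], place[b] = place[b], place[a]  # undo it
    return place

# A deliberately bad seed: connected blocks start far apart.
seed = {"cpu": (0, 0), "gpu": (4, 0), "mem": (4, 3), "io": (0, 3)}
nets = [("cpu", "mem"), ("gpu", "io")]
print(wirelength(seed, nets))                # 14
print(wirelength(refine(seed, nets), nets))  # 6
```

A local optimizer like this can only reach layouts near its starting point, so the seed constrains the final quality; that is the substance of the UCSD argument that an undisclosed Synopsys-derived starting arrangement deserved mention.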

The lead authors of Google’s paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team’s work isn’t an accurate implementation of their method. They pointed out (PDF) that Prof Kahng’s group obtained worse results because it didn’t pre-train its model on any data at all. Prof Kahng’s team also did not train its system using the same amount of computing power as Google, and the pair suggested this step may not have been carried out properly, crippling the model’s performance. Mirhoseini and Goldie also said the pre-processing step using EDA applications, which was not explicitly described in their Nature paper, wasn’t important enough to mention. The UCSD group, however, said it didn’t pre-train its model because it didn’t have access to Google’s proprietary data. It said its software had been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper. Separately, a fired Google AI researcher claims the internet goliath’s research paper was “done in context of a large potential Cloud deal” worth $120 million at the time.


Bill Gates Predicts ‘The Age of AI Has Begun’

Bill Gates calls the invention of AI “as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone,” predicting “Entire industries will reorient around it” in an essay titled “The Age of AI Has Begun.”
In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows…. The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you’ll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent of getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: “What do you say to a father with a sick child?” It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

Some predictions from Gates:

“Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you’ll be able to write a request in plain English….”

“Advances in AI will enable the creation of a personal agent… It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with.”

“I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback.”

“AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way.”

AI will “help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit. I expect that there will be a lot of innovation in this area…. AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.”


Some Apple Employees Fear Its $3,000 Mixed-Reality Headset Could Flop

An anonymous reader shares this report from AppleInsider:
Apple has reportedly demonstrated its mixed-reality headset to its top executives recently, in an attempt to generate excitement ahead of the platform’s launch. While executives are keen on the product, others within Apple are not sure it’s a home run. Eight anonymous current and former employees told the New York Times that they are skeptical about the headset, despite Apple’s glossy demonstration of the technology.

Manufacturing has already begun for a June release of the $3,000 headset, insiders say in the Times’ article:

Some employees have defected from the project because of their doubts about its potential, three people with knowledge of the moves said. Others have been fired over the lack of progress with some aspects of the headset, including its use of Apple’s Siri voice assistant, one person said. Even leaders at Apple have questioned the product’s prospects. It has been developed at a time when morale has been strained by a wave of departures from the company’s design team, including [Jony] Ive, who left Apple in 2019 and stopped advising the company last year….

Because the headset won’t fit over glasses, the company has plans to sell prescription lenses for the displays to people who don’t wear contacts, a person familiar with the plan said. During the device’s development, Apple has focused on making it excel for videoconferencing and spending time with others as avatars in a virtual world. The company has called the device’s signature application “copresence,” a word designed to capture the experience of sharing a real or virtual space with someone in another place. It is akin to what Mark Zuckerberg, Facebook’s founder, calls the “metaverse….”

But the road to deliver augmented reality has been littered with failures, false starts and disappointments, from Google Glass to Magic Leap and from Microsoft’s HoloLens to Meta’s Quest Pro. Apple is considered a potential savior because of its success combining new hardware and software to create revolutionary devices.
Still, the challenges are daunting.
