‘Vulkan Files’ Leak Reveals Putin’s Global and Domestic Cyberwarfare Tactics

“The Guardian reports on a document leak from Russian cyber ‘security’ company Vulkan,” writes Slashdot reader Falconhell. From the report: Inside the six-storey building, a new generation is helping Russian military operations. Its weapons are more advanced than those of Peter the Great’s era: not pikes and halberds, but hacking and disinformation tools. The software engineers behind these systems are employees of NTC Vulkan. On the surface, it looks like a run-of-the-mill cybersecurity consultancy. However, a leak of secret files from the company has exposed its work bolstering Vladimir Putin’s cyberwarfare capabilities.

Thousands of pages of secret documents reveal how Vulkan’s engineers have worked for Russian military and intelligence agencies to support hacking operations, train operatives before attacks on national infrastructure, spread disinformation and control sections of the internet. The company’s work is linked to the Federal Security Service (FSB), the domestic spy agency; the operational and intelligence divisions of the armed forces, known as the GOU and GRU; and the SVR, Russia’s foreign intelligence organization.

One document links a Vulkan cyber-attack tool with the notorious hacking group Sandworm, which the US government said twice caused blackouts in Ukraine, disrupted the Olympics in South Korea and launched NotPetya, the most economically destructive malware in history. The tool, codenamed Scan-V, scours the internet for vulnerabilities, which are then stored for use in future cyber-attacks. Another system, known as Amezit, amounts to a blueprint for surveilling and controlling the internet in regions under Russia’s command, and also enables disinformation via fake social media profiles. A third Vulkan-built system — Crystal-2V — is a training program for cyber-operatives in the methods required to bring down rail, air and sea infrastructure. A file explaining the software states: “The level of secrecy of processed and stored information in the product is ‘Top Secret’.”


Meta Wants EU Users To Apply For Permission To Opt Out of Data Collection

Meta announced that starting next Wednesday, some Facebook and Instagram users in the European Union will for the first time be able to opt out of sharing first-party data used to serve highly personalized ads, The Wall Street Journal reported. The move marks a big change from Meta’s current business model, where every video and piece of content clicked on its platforms provides a data point for its online advertisers. Ars Technica reports: People “familiar with the matter” told the Journal that Facebook and Instagram users will soon be able to access a form that can be submitted to Meta to object to sweeping data collection. If those requests are approved, those users will only allow Meta to target ads based on broader categories of data collection, like age range or general location. This is different from efforts by other major tech companies like Apple and Google, which prompt users to opt in or out of highly personalized ads with the click of a button. Instead, Meta will review objection forms to evaluate reasons provided by individual users to end such data collection before it will approve any opt-outs. It’s unclear what cause Meta may have to deny requests.

A Meta spokesperson told Ars that Meta is not sharing the objection form publicly at this time but that it will be available to EU users in its Help Center starting on April 5. That’s the deadline Meta was given to comply with an Irish regulator’s rulings that it was illegal in the EU for Meta to force Facebook and Instagram users to give consent to data collection when they signed contracts to use the platforms. Meta still plans to appeal those Irish Data Protection Commission (DPC) rulings, believing that its prior contract’s legal basis complies with the EU’s General Data Protection Regulation (GDPR). In the meantime, though, the company must change the legal basis for data collection. Meta announced in a blog post today that it will now argue that it does not need to directly obtain user consent because it has a “legitimate interest” in collecting data to operate its social platforms. “We believe that our previous approach was compliant under GDPR, and our appeal on both the substance of the rulings and the fines continues,” Meta’s blog said. “However, this change ensures that we comply with the DPC’s decision.”


China Shuts Down Major Manga Piracy Site Following Complaint From Japan

Anti-piracy group CODA is reporting the shutdown of B9Good, a pirate manga site that targeted Japan but was operated from China. In response to a criminal complaint filed by CODA on behalf of six Japanese companies, which were backed by 21 others during the investigation, Chinese authorities arrested four people and seized one house worth $580,000. TorrentFreak reports: Manga piracy site B9Good initially appeared in 2008 and established itself under B9DM branding. SimilarWeb stats show that the site was enjoying around 15 million visits each month, with CODA noting that in the two-year period leading to February 2023, the site was accessed more than 300 million times. Around 95% of the site’s visitors came from Japan. B9Good had been featured in an MPA submission to the USTR’s Notorious Markets report in 2019. Traffic was reported as almost 16 million visits per month back then, meaning that site visitor numbers remained stable over the following three years. The MPA said the site was possibly hosted in Canada, but domain records since then show a wider spread, including Hong Kong, China, the United States, Bulgaria, and Japan.

Wherever the site ended up, the location of its operator was more important. In 2021, CODA launched its Cross-Border Enforcement Project (CBEP), which aimed to personally identify the operators of pirate sites, including those behind B9Good, who were eventually traced to China. Pursuing copyright cases from outside China is reportedly difficult, but CODA had a plan. In January 2022, CODA’s Beijing office was recognized as an NGO with legitimate standing to protect the rights of its member companies. Working on behalf of Aniplex, TV Tokyo, Toei Animation, Toho, Japan Broadcasting Corporation (NHK), and Bandai Namco Film Works, CODA filed a criminal complaint in China, and starting February 14, 2023, local authorities began rounding up the B9Good team.


‘Pausing AI Developments Isn’t Enough. We Need To Shut It All Down’

Earlier today, more than 1,100 artificial intelligence experts, industry leaders and researchers signed a petition calling on AI developers to stop training models more powerful than OpenAI’s GPT-4 for at least six months. Among those who refrained from signing it was Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher at the Machine Intelligence Research Institute. He’s been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

“This 6-month moratorium would be better than no moratorium,” writes Yudkowsky in an opinion piece for Time Magazine. “I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.” Yudkowsky cranks up the rhetoric to 100, writing: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” Here’s an excerpt from his piece: The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. […] It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. […]

It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence — not perfect safety, safety in the sense of “not killing literally everyone” — could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.

Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die. You can read the full letter signed by AI leaders here.


Binance Concealed Ties To China For Years, Even After 2017 Crypto Crackdown, Report Finds

Binance CEO Changpeng “CZ” Zhao and other senior executives have for years concealed the crypto exchange’s ties with China, according to documents obtained by the Financial Times. CoinTelegraph reports: In a March 29 report, the FT claims that Binance maintained substantial ties to China for several years, contrary to the company’s claims that it left the country after a 2017 ban on crypto, including an office still in use by the end of 2019 and a Chinese bank used to pay employees. “We no longer publish our office addresses … people in China can directly say that our office is not in China,” Zhao reportedly said in a company message group in November 2017. Employees were told in 2018 that wages would be paid through a Shanghai-based bank. A year later, personnel on payroll in China were required to attend tax sessions in an office based in the country, according to the FT. According to the messages, Binance employees discussed a 2019 media report claiming the company would open an office in Beijing. “Reminder: publicly, we have offices in Malta, Singapore, and Uganda. […] Please do not confirm any offices anywhere else, including China.”

The report backs up accusations made in a lawsuit filed on March 27 by the United States Commodity Futures Trading Commission (CFTC) against the exchange, claiming that Binance obscured the location of its executive offices, as well as the “identities and locations of the entities operating the trading platform.” According to the lawsuit, Zhao stated in an internal Binance memo that the policy was intended to “keep countries clean [of violations of law]” by “not landing .com anywhere. This is the main reason .com does not land anywhere.”


Google’s Claims of Super-Human AI Chip Layout Back Under the Microscope

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. The Register reports: In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as the CPU and GPU cores, and memory and peripheral controllers, actually sit on the physical silicon die. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. The research got the attention of the electronic design automation community, which was already moving toward incorporating machine-learning algorithms into its software suites. Now Google’s claims about its better-than-humans model have been challenged by a team at the University of California, San Diego (UCSD).
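To make the objective concrete, here is a minimal Python sketch of the kind of score a floorplanner tries to minimize. It is a toy illustration, not code from the Nature paper: half-perimeter wirelength (HPWL) is a standard industry proxy for wiring cost, and every block and net name below is hypothetical.

```python
# Illustrative only -- not Google's code. Half-perimeter wirelength (HPWL):
# for each net (a group of connected blocks), take the half-perimeter of the
# bounding box around the blocks' positions, then sum over all nets.
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def hpwl(placement: Dict[str, Point], nets: List[List[str]]) -> float:
    """Sum of bounding-box half-perimeters over all nets (lower is better)."""
    total = 0.0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Three hypothetical blocks and two nets. An optimizer -- an RL agent in
# Google's system, simulated annealing in classic tools -- moves blocks to
# shrink this score while also honoring overlap, density and timing limits.
placement = {"cpu": (0.0, 0.0), "mem": (4.0, 1.0), "io": (1.0, 3.0)}
nets = [["cpu", "mem"], ["cpu", "io", "mem"]]
print(hpwl(placement, nets))  # 5.0 + 7.0 = 12.0
```

Real tools optimize richer objectives than this, but HPWL is the usual shorthand for what “a better floorplan” means in placement benchmarks.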

Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse-engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model’s inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to make their own complete version to verify the Googlers’ findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google’s paper. The university academics ultimately found that their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.

What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip’s logic gates that the web giant’s reinforcement learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus its model deserved credit for the optimized end product. However, the UCSD team said there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model to iterate over. It’s argued these Synopsys tools may have given the model a decent enough head start that the AI system’s true capabilities should be called into question.
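The weight of such a head start is easier to see with a toy optimizer. The sketch below is an analogy only, reconstructing neither Google’s RL system nor the Synopsys tools, and it assumes the hypothetical hpwl function and nets list from the earlier sketch are in scope: a greedy local search with a fixed move budget finishes at a much lower cost when seeded with an already-tightened layout than with a random one, yet only the final score shows up in the output.

```python
import random

# Toy greedy placer: nudge a random block, keep the move only if it lowers
# the hpwl score defined in the earlier sketch. Analogy only.
def local_search(placement, nets, steps=50, seed=0):
    rng = random.Random(seed)
    best, best_cost = dict(placement), hpwl(placement, nets)
    for _ in range(steps):
        block = rng.choice(sorted(best))               # block to nudge
        x, y = best[block]
        candidate = dict(best)
        candidate[block] = (x + rng.uniform(-1, 1), y + rng.uniform(-1, 1))
        cost = hpwl(candidate, nets)
        if cost < best_cost:                           # keep improvements only
            best, best_cost = candidate, cost
    return best_cost

# Same optimizer, same move budget; only the starting layout differs.
rng = random.Random(42)
random_start = {b: (rng.uniform(0, 50), rng.uniform(0, 50))
                for b in ("cpu", "mem", "io")}
seeded_start = {"cpu": (0.0, 0.0), "mem": (2.0, 1.0), "io": (1.0, 2.0)}
print(local_search(random_start, nets))  # finishes far higher
print(local_search(seeded_start, nets))  # finishes near optimal
```

Judging both runs only by their final scores would credit the optimizer for work the seed did, which is the crux of the UCSD team’s objection.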

The lead authors of Google’s paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team’s work isn’t an accurate implementation of their method. They pointed out (PDF) that Prof Kahng’s group obtained worse results because it didn’t pre-train the model on any data at all. Prof Kahng’s team also did not train the system using the same amount of computing power as Google used; Mirhoseini and Goldie suggested this step may not have been carried out properly, crippling the model’s performance. They also said the pre-processing step using EDA applications, which was not explicitly described in their Nature paper, wasn’t important enough to mention. The UCSD group countered that they didn’t pre-train their model because they didn’t have access to Google’s proprietary data, and that their software had been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper. Separately, a fired Google AI researcher claims the internet goliath’s research paper was “done in context of a large potential Cloud deal” worth $120 million at the time.
