Will ‘Precision Agriculture’ Be Harmful to Farmers?

Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger.

There are autonomous tractors and “smart spraying” systems that use AI-powered cameras to identify weeds, just for starters. “Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, ‘big data’ analytics and cloud computing…”

As with any technological revolution, however, there are both “winners” and “losers” in the emerging age of precision agriculture… Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural workforce. First, as the USDA notes on its website, the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms… (Listen to my podcast interview with the hacker Sick Codes, who reverse-engineered a John Deere display to run the Doom video game, for insights into the company’s internal struggles with cybersecurity.)

Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers’ heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. But in doing so, it also wrests it from the farmer’s control and shares it with equipment manufacturers and service providers — often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that it included a section (1619) on “information gathering” in the latest farm bill.
Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets, their data, and turning them into little more than passive caretakers of automated equipment that is managed and controlled by, and accountable to, distant corporate masters.

Weighing in is Kevin Kenney, a vocal advocate for the “right to repair” agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it’s the long-time farmer’s “trade secrets” that are really being harvested today. The ultimate beneficiary could end up being the current “cabal” of tractor manufacturers.

“While we can all agree that it’s coming…the question is who will own these robots?”

First, we need to acknowledge that there are existing laws on the books which, for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the ‘Tractor Cabal’ to see to what extent farmers’ farm data security and privacy are being compromised. This directly affects national food security, because if thousands, or tens of thousands, of tractors are hacked and disabled or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store… I think our universities have also been delinquent in grasping and warning farmers about the data theft being perpetrated on farmers’ operations throughout the United States and other countries by makers of precision agricultural equipment.

Thanks to long-time Slashdot reader chicksdaddy for sharing the article.

Read more of this story at Slashdot.

Scientists Propose AI Apocalypse Kill Switches

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions — including those designed to improve visibility and limit the sale of AI accelerators — are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you’re not familiar, “dual-use” refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent “know-your-customer” policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive buildup of ballistic missiles. While valuable, they warn, executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we’ve previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they’ve left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
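The lifecycle registry the researchers propose could be sketched as a simple custody log keyed by each chip's unique identifier. The following is a purely hypothetical illustration; the class, method names, and fields are invented here and are not taken from the paper:

```python
class ChipRegistry:
    """Toy global registry: each accelerator gets a unique ID and a custody
    trail that follows it across owners and borders (illustrative only)."""

    def __init__(self):
        self._chips = {}  # chip_id -> list of (owner, country) entries

    def register(self, chip_id: str, manufacturer: str, country: str):
        """Record a newly fabricated chip at its point of origin."""
        self._chips[chip_id] = [(manufacturer, country)]

    def transfer(self, chip_id: str, new_owner: str, country: str):
        """Append a change of custody; an unknown ID suggests a smuggled part."""
        if chip_id not in self._chips:
            raise KeyError(f"unregistered chip {chip_id}: possible smuggled part")
        self._chips[chip_id].append((new_owner, country))

    def custody(self, chip_id: str):
        """Return the full ownership history for audit."""
        return list(self._chips[chip_id])
```

The point of such a scheme is that a chip surfacing without a matching registry entry, or with a custody trail that skips a border, becomes evidence of diversion rather than an invisible loss.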

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. […] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: “Specialized co-processors that sit on the chip could hold a cryptographically signed digital ‘certificate,’ and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance.” In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn’t without risk: if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
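The quoted licensing mechanism can be roughed out in a few lines. This is a sketch under stated assumptions, not the paper's design: the key handling, policy fields, and throttling behavior are all invented, and an HMAC stands in for the asymmetric signature a real regulator would use.

```python
import hashlib
import hmac
import json

# Hypothetical shared key; a real scheme would use the regulator's public key
REGULATOR_KEY = b"demo-regulator-signing-key"


def issue_license(chip_id: str, max_tops: int, valid_until: float) -> dict:
    """Regulator signs a use-case policy for one chip."""
    policy = {"chip_id": chip_id, "max_tops": max_tops, "valid_until": valid_until}
    payload = json.dumps(policy, sort_keys=True).encode()
    policy["sig"] = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return policy


def effective_performance(license: dict, chip_id: str, rated_tops: int, now: float) -> int:
    """On-chip check: an illegitimate license disables the part, an expired one throttles it."""
    payload = json.dumps(
        {k: v for k, v in license.items() if k != "sig"}, sort_keys=True
    ).encode()
    expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(license.get("sig", ""), expected) or license["chip_id"] != chip_id:
        return 0                                  # illegitimate license: chip refuses to run
    if now > license["valid_until"]:
        return rated_tops // 10                   # expired license: degraded performance
    return min(rated_tops, license["max_tops"])   # valid license: honor the policy cap
```

Because the policy is signed, tampering with the cap invalidates the signature and the check falls through to the "illegitimate" branch, which is the property the proposal relies on.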

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. “Nuclear weapons use similar mechanisms called permissive action links,” they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they’d first need to get authorization to do so. Though a potent tool, this could backfire, the researchers observe, by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn’t always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as “allocation.”
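The permissive-action-link analogy amounts to a quorum gate in front of large training runs. A minimal sketch, with the threshold value, names, and quorum size all chosen arbitrarily for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical compute threshold above which a run needs sign-off
TRAINING_THRESHOLD_FLOPS = 1e26


@dataclass
class TrainingRequest:
    job_id: str
    estimated_flops: float
    approvals: set = field(default_factory=set)  # distinct authorities who signed off


def approve(req: TrainingRequest, authority: str) -> None:
    """Record one authority's sign-off on a pending run."""
    req.approvals.add(authority)


def may_launch(req: TrainingRequest, quorum: int = 2) -> bool:
    """Small jobs run freely; jobs over the threshold need a quorum of approvers."""
    if req.estimated_flops < TRAINING_THRESHOLD_FLOPS:
        return True
    return len(req.approvals) >= quorum
```

As with the nuclear case, the safety property is that no single party can launch a gated run alone; the trade-off the researchers flag is that the same gate can block legitimate work.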


Largest Text-To-Speech AI Model Yet Shows ‘Emergent Abilities’

Devin Coldewey reports via TechCrunch: Researchers at Amazon have trained the largest text-to-speech model yet, which they claim exhibits “emergent” qualities improving its ability to speak even complex sentences naturally. The breakthrough could be what the technology needs to escape the uncanny valley. These models were always going to grow and improve, but the researchers specifically hoped to see the kind of leap in ability that we observed once language models got past a certain size. For reasons unknown to us, once LLMs grow past a certain point, they start being way more robust and versatile, able to perform tasks they weren’t trained to. That is not to say they are gaining sentience or anything, just that past a certain point their performance on certain conversational AI tasks hockey sticks. The team at Amazon AGI — no secret what they’re aiming at — thought the same might happen as text-to-speech models grew as well, and their research suggests this is in fact the case.

The new model is called Big Adaptive Streamable TTS with Emergent abilities, which they have contorted into the abbreviation BASE TTS. The largest version of the model was trained on 100,000 hours of public domain speech, 90% of which is in English, the remainder in German, Dutch and Spanish. At 980 million parameters, BASE-large appears to be the biggest model in this category. They also trained 400M- and 150M-parameter models based on 10,000 and 1,000 hours of audio respectively, for comparison — the idea being that if one of these models shows emergent behaviors but another doesn’t, you have a range for where those behaviors begin to emerge. As it turns out, the medium-sized model showed the jump in capability the team was looking for, not necessarily in ordinary speech quality (it is rated better, but only by a couple of points) but in the set of emergent abilities they observed and measured. Here are examples of tricky text mentioned in the paper:

– Compound nouns: The Beckhams decided to rent a charming stone-built quaint countryside holiday cottage.
– Emotions: “Oh my gosh! Are we really going to the Maldives? That’s unbelievable!” Jennie squealed, bouncing on her toes with uncontained glee.
– Foreign words: “Mr. Henry, renowned for his mise en place, orchestrated a seven-course meal, each dish a pièce de résistance.”
– Paralinguistics (i.e. readable non-words): “Shh, Lucy, shhh, we mustn’t wake your baby brother,” Tom whispered, as they tiptoed past the nursery.
– Punctuations: She received an odd text from her brother: ‘Emergency @ home; call ASAP! Mom & Dad are worried… #familymatters.’
– Questions: But the Brexit question remains: After all the trials and tribulations, will the ministers find the answers in time?
– Syntactic complexities: The movie that De Moya who was recently awarded the lifetime achievement award starred in 2022 was a box-office hit, despite the mixed reviews.

You can read more examples of these difficult texts being spoken naturally here.


In Big Tech’s Backyard, a California State Lawmaker Unveils a Landmark AI Bill

An anonymous reader shared this report from the Washington Post:

A California state lawmaker introduced a bill on Thursday aiming to force companies to test the most powerful artificial intelligence models before releasing them — a landmark proposal that could inspire regulation around the country as state legislatures increasingly tackle the swiftly evolving technology.

The new bill, sponsored by state Sen. Scott Wiener, a Democrat who represents San Francisco, would require companies training new AI models to test their tools for “unsafe” behavior, institute hacking protections and develop the tech in such a way that it can be shut down completely, according to a copy of the bill. AI companies would have to disclose testing protocols and what guardrails they put in place to the California Department of Technology. If the tech causes “critical harm,” the state’s attorney general can sue the company.

Wiener’s bill comes amid an explosion of state bills addressing artificial intelligence, as policymakers across the country grow wary that years of inaction in Congress have created a regulatory vacuum that benefits the tech industry. But California, home to many of the world’s largest technology companies, plays a singular role in setting precedent for tech industry guardrails. “You can’t work in software development and ignore what California is saying or doing,” said Lawrence Norden, the senior director of the Brennan Center’s Elections and Government Program… Wiener says he thinks the bill can be passed by the fall.
The article notes there are now 407 AI-related bills active in 44 U.S. states, according to an analysis by an industry group called BSA The Software Alliance, with several already signed into law. “The proliferation of state-level bills could lead to greater industry pressure on Congress to pass AI legislation, because complying with a federal law may be easier than responding to a patchwork of different state laws.”
Even the proposed California law “largely builds off an October executive order by President Biden,” according to the article, “that uses emergency powers to require companies to perform safety tests on powerful AI systems and share those results with the federal government. The California measure goes further than the executive order, to explicitly require hacking protections, protect AI-related whistleblowers and force companies to conduct testing.”

The article also adds that, as America’s most populous state, “California has unique power to set standards that have impact across the country.” And the group behind last year’s statement on AI risk helped draft the legislation, according to the article, though Wiener says he also consulted tech workers, CEOs, and activists: “We’ve done enormous stakeholder outreach over the past year.”


Microsoft AI Engineer Says Company Thwarted Attempt To Expose DALL-E 3 Safety Problems

Todd Bishop reports via GeekWire: A Microsoft AI engineering leader says he discovered vulnerabilities in OpenAI’s DALL-E 3 image generator in early December allowing users to bypass safety guardrails to create violent and explicit images, and that the company impeded his previous attempt to bring public attention to the issue. The emergence of explicit deepfake images of Taylor Swift last week “is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL-E 3 from public use and reported my concerns to Microsoft,” writes Shane Jones, a Microsoft principal software engineering lead, in a letter Tuesday to Washington state’s attorney general and Congressional representatives.

404 Media reported last week that the fake explicit images of Swift originated in a “specific Telegram group dedicated to abusive images of women,” noting that at least one of the AI tools commonly used by the group is Microsoft Designer, which is based in part on technology from OpenAI’s DALL-E 3. “The vulnerabilities in DALL-E 3, and products like Microsoft Designer that use DALL-E 3, makes it easier for people to abuse AI in generating harmful images,” Jones writes in the letter to U.S. Sens. Patty Murray and Maria Cantwell, Rep. Adam Smith, and Attorney General Bob Ferguson, which was obtained by GeekWire. He adds, “Microsoft was aware of these vulnerabilities and the potential for abuse.”

Jones writes that he discovered the vulnerability independently in early December. He reported the vulnerability to Microsoft, according to the letter, and was instructed to report the issue to OpenAI, the Redmond company’s close partner, whose technology powers products including Microsoft Designer. He writes that he did report it to OpenAI. “As I continued to research the risks associated with this specific vulnerability, I became aware of the capacity DALL-E 3 has to generate violent and disturbing harmful images,” he writes. “Based on my understanding of how the model was trained, and the security vulnerabilities I discovered, I reached the conclusion that DALL-E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model.”

On Dec. 14, he writes, he posted publicly on LinkedIn urging OpenAI’s non-profit board to withdraw DALL-E 3 from the market. He informed his Microsoft leadership team of the post, according to the letter, and was quickly contacted by his manager, saying that Microsoft’s legal department was demanding that he delete the post immediately, and would follow up with an explanation or justification. He agreed to delete the post on that basis but never heard from Microsoft legal, he writes. “Over the following month, I repeatedly requested an explanation for why I was told to delete my letter,” he writes. “I also offered to share information that could assist with fixing the specific vulnerability I had discovered and provide ideas for making AI image generation technology safer. Microsoft’s legal department has still not responded or communicated directly with me.” “Artificial intelligence is advancing at an unprecedented pace. I understand it will take time for legislation to be enacted to ensure AI public safety,” he adds. “At the same time, we need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public. Concerned employees, like myself, should not be intimidated into staying silent.” The full text of Jones’ letter can be read here (PDF).


Apple’s Large Language Model Shows Up in New iOS Code

An anonymous reader shares a report: Apple is widely expected to unveil major new artificial intelligence features with iOS 18 in June. Code found by 9to5Mac in the first beta of iOS 17.4 shows that Apple is continuing to work on a new version of Siri powered by large language model technology, with a little help from other sources. In fact, Apple appears to be using OpenAI’s ChatGPT API for internal testing to help the development of its own AI models. According to this code, iOS 17.4 includes a new SiriSummarization private framework that makes calls to OpenAI’s ChatGPT API. This appears to be something Apple is using for internal testing of its new AI features. There are multiple examples of system prompts for the SiriSummarization framework in iOS 17.4 as well. This includes things like “please summarize,” “please answer this questions,” and “please summarize the given text.”

Apple is unlikely to use OpenAI models to power any of its artificial intelligence features in iOS 18. Instead, what it’s doing here is testing its own AI models against ChatGPT. For example, the SiriSummarization framework can do summarization using on-device models. Apple appears to be using its own AI models to power this framework, then internally comparing its results against the results of ChatGPT. In total, iOS 17.4 code suggests Apple is testing four different AI models. This includes Apple’s internal model called “Ajax,” which Bloomberg has previously reported. iOS 17.4 shows that there are two versions of AjaxGPT, including one that is processed on-device and one that is not.
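The testing setup 9to5Mac describes, running a prompt through an on-device model and benchmarking the result against ChatGPT, could be approximated with a small harness like the one below. Everything here is invented for illustration: the function names, the word-overlap score, and the stub summarizers have nothing to do with Apple's actual SiriSummarization code, and a real comparison would presumably use human raters or a stronger similarity metric.

```python
def word_overlap(reference: str, candidate: str) -> float:
    """Crude similarity score: fraction of the reference's words the candidate shares."""
    ref, cand = set(reference.lower().split()), set(candidate.lower().split())
    return len(ref & cand) / len(ref) if ref else 0.0


def compare_summarizers(text: str, on_device, baseline) -> dict:
    """Run both summarizers on the same input and score the on-device output
    against the baseline (which, per the report, would be a ChatGPT API call)."""
    local_summary = on_device(text)
    baseline_summary = baseline(text)
    return {
        "local": local_summary,
        "baseline": baseline_summary,
        "overlap": word_overlap(baseline_summary, local_summary),
    }
```

The value of this kind of harness is that the same prompt set can be replayed against each of the four models the code reportedly names, giving a side-by-side score sheet rather than anecdotal comparisons.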


FTC Launches Inquiry Into AI Deals by Tech Giants

The Federal Trade Commission launched an inquiry (non-paywalled link) on Thursday into the multibillion-dollar investments by Microsoft, Amazon and Google in the artificial intelligence start-ups OpenAI and Anthropic, broadening the regulator’s efforts to corral the power the tech giants can have over A.I. The New York Times: These deals have allowed the big companies to form deep ties with their smaller rivals while dodging most government scrutiny. Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT, while Amazon and Google have each committed billions of dollars to Anthropic, another leading A.I. start-up.

Regulators have typically focused on bringing antitrust lawsuits against deals where the tech giants are buying rivals outright or using acquisitions to expand into new businesses, leading to increased prices and other harm, and have not regularly challenged stakes that the companies buy in start-ups. The F.T.C.’s inquiry will examine how these investment deals alter the competitive landscape and could inform any investigations by federal antitrust regulators into whether the deals have broken laws.

The F.T.C. said it would ask Microsoft, OpenAI, Amazon, Google and Anthropic to describe their influence over their partners and how they worked together to make decisions. It also said it would demand that they provide any internal documents that could shed light on the deals and their potential impact on competition.


OpenAI Quietly Scrapped a Promise To Disclose Key Documents To the Public

From its founding, OpenAI said its governing documents were available to the public. When WIRED requested copies after the company’s boardroom drama, it declined to provide them. Wired: Wealthy tech entrepreneurs including Elon Musk launched OpenAI in 2015 as a nonprofit research lab that they said would involve society and the public in the development of powerful AI, unlike Google and other giant tech companies working behind closed doors. In line with that spirit, OpenAI’s reports to US tax authorities have from its founding said that any member of the public can review copies of its governing documents, financial statements, and conflict of interest rules. But when WIRED requested those records last month, OpenAI said its policy had changed, and the company provided only a narrow financial statement that omitted the majority of its operations.

“We provide financial statements when requested,” company spokesperson Niko Felix says. “OpenAI aligns our practices with industry standards, and since 2022 that includes not publicly distributing additional internal documents.” OpenAI’s abandonment of the long-standing transparency pledge obscures information that could shed light on the recent near-implosion of a company with crucial influence over the future of AI and could help outsiders understand its vulnerabilities. In November, OpenAI’s board fired CEO Sam Altman, implying in a statement that he was untrustworthy and had endangered its mission to ensure AI “benefits all humanity.” An employee and investor revolt soon forced the board to reinstate Altman and eject most of its own members, with an overhauled slate of directors vowing to review the crisis and enact structural changes to win back the trust of stakeholders.
