Swarming Drones Autonomously Navigate a Dense Forest

Chinese researchers show off a swarm of drones collectively navigating a dense forest they’ve never encountered. TechCrunch reports: Researchers at Zhejiang University in Hangzhou have succeeded with a 10-strong drone swarm smart enough to fly autonomously through a dense, unfamiliar forest, yet small and light enough that each one can easily fit in the palm of your hand. It’s a big step toward using swarms like this for tasks such as aerial surveying and disaster response.

Based on an off-the-shelf ultra-compact drone design, the team built a trajectory planner for the group that relies entirely on data from the swarm’s onboard sensors, which the drones process locally and share with one another. The drones can balance, or be directed to prioritize, various goals: maintaining a certain distance from obstacles or from each other, minimizing the total flight time between two points, and so on.
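
Planners of this kind typically fold the competing goals into a single trajectory cost to minimize. As a rough sketch of the idea (the weights, penalty shapes, and names below are illustrative assumptions, not the authors' formulation):

```python
import math

# Hypothetical weights; the real planner's terms and tuning live in the paper.
W_TIME, W_OBSTACLE, W_SEPARATION = 1.0, 10.0, 5.0
SAFE_CLEARANCE = 1.0  # meters to keep from obstacles (assumed)
SAFE_SPACING = 1.5    # meters to keep between drones (assumed)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def trajectory_cost(waypoints, speed, obstacles, neighbor_trajectories):
    """Score one drone's candidate path; lower is better."""
    # Goal 1: minimize total flight time (path length / cruise speed).
    length = sum(dist(waypoints[i], waypoints[i + 1])
                 for i in range(len(waypoints) - 1))
    cost = W_TIME * (length / speed)
    # Goal 2: penalize getting closer to any obstacle than the clearance.
    for p in waypoints:
        for obs in obstacles:
            gap = dist(p, obs)
            if gap < SAFE_CLEARANCE:
                cost += W_OBSTACLE * (SAFE_CLEARANCE - gap) ** 2
    # Goal 3: penalize crowding the trajectories neighbors have shared.
    for other in neighbor_trajectories:
        for p, q in zip(waypoints, other):
            gap = dist(p, q)
            if gap < SAFE_SPACING:
                cost += W_SEPARATION * (SAFE_SPACING - gap) ** 2
    return cost

# Toy usage: a straight path past one obstacle, one neighbor flying parallel.
path = [(0, 0, 1), (1, 0, 1), (2, 0, 1)]
neighbor = [(0, 1, 1), (1, 1, 1), (2, 1, 1)]
print(trajectory_cost(path, speed=2.0, obstacles=[(1, 0.5, 1)],
                      neighbor_trajectories=[neighbor]))
```

Re-weighting the terms shifts the swarm's behavior, for instance toward wider obstacle clearance at the cost of longer flight times.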

The drones can also, worryingly, be given a task like “follow this human.” We’ve all seen enough movies to know this is how it starts… but of course it could be useful in rescue or combat circumstances as well. Part of their navigation involves mapping the world around them, of course, and the paper includes some very cool-looking 3D representations of the environments the swarm was sent through. The study is published in the most recent issue of the journal Science Robotics, along with several videos showing off the drones in action.

Read more of this story at Slashdot.

Lyft Exec Craig Martell Tapped As Pentagon’s AI Chief

According to Breaking Defense, the head of machine learning at Lyft, Craig Martell, has been named the Pentagon’s Chief Digital and Artificial Intelligence Officer. From the report: The hiring of a Silicon Valley persona for the CDAO role is likely to be cheered by those in the defense community who have been calling for more technically minded individuals to take leadership roles in the department. At the same time, Martell’s lack of Pentagon experience — he was a professor at the Naval Postgraduate School for over a decade studying AI for the military, but has never worked in the department’s bureaucracy — may pose challenges as he works to lead an office only months old. In an exclusive interview with Breaking Defense, Martell, who also worked as head of machine learning at Dropbox and led several AI teams at LinkedIn, acknowledged both the benefits and risks of bringing in someone with his background. […]

As CDAO, Martell will be responsible for scaling up DoD’s data, analytics and AI to enable quicker and more accurate decision-making and will also play an important role in the Pentagon’s Joint All-Domain Command and Control efforts to connect sensors and shooters. “If we’re going to be successful in achieving the goals, if we’re going to be successful in being competitive with China, we have to figure out where the best mission value can be found first and that’s going to have to drive what we build, what we design, the policies we come up with,” Martell said. “I just want to guard against making sure that we don’t do this in a vacuum, but we do it with real mission goals, real mission objectives in mind.”

His first order of business? Figuring out what needs to be done, and how best to use the $600 million earmarked for the CDAO’s office in the Pentagon’s fiscal year 2023 budget request. “So whenever I tackle a problem, whenever I go into a new organization, the first questions that I ask are: Do we have the right people? Do we have the right processes? Do we have the right tools to solve the visions [and] goals?” Martell said. To tackle that, Martell wants to identify the office’s “marquee customers” and figure out what’s “broken in terms of… people, platform, processes and tools” — a process that could take anywhere from three to six months, he added. “We really want to be customer-driven here,” Martell said. “We don’t want to walk in and say if we build it, they’ll come.”

Read more of this story at Slashdot.

Are You the Asshole? New AI Mimics Infamous Advice Subreddit

Two artists are illustrating bias in machine learning with data from one of the most opinionated sites on the internet: the r/AmITheAsshole subreddit. Motherboard reports: Now boasting 3.9 million users, the r/AmITheAsshole subreddit has become known as one of the leading forums where users can seek and share advice with virtual strangers. Internet artists Morry Kolman and Alex Petros trained three AI models using comments from over 60,000 posts from the popular subreddit. They filtered the comments according to the subreddit’s formal voting guidelines, under which users vote on whether the poster is the asshole, is not, or whether ESH (Everyone Sucks Here). What results is a positive bot, a negative bot, and a neutral bot that respond to every scenario submitted. The project, funded by Digital Void, aims to illustrate how training data can bias the decision-making abilities of artificial intelligence models. Now, their bots can help you answer the age-old question: Are you the asshole?
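
Motherboard doesn't publish the artists' pipeline, but the filtering step it describes (sorting comments into per-verdict training corpora using the subreddit's judgment tags) might look like this sketch, where the tag-to-bot mapping is an assumption:

```python
# r/AmITheAsshole verdicts conventionally lead top-level comments:
# NTA (Not the Asshole), YTA (You're the Asshole), ESH (Everyone Sucks Here),
# NAH (No Assholes Here). How these map onto the three bots is assumed here.
POSITIVE_TAGS = ("NTA",)
NEGATIVE_TAGS = ("YTA",)
NEUTRAL_TAGS = ("ESH", "NAH")

def bucket(comment_body):
    """Assign a comment to a training corpus by its leading verdict tag."""
    head = comment_body.strip().upper()
    if head.startswith(POSITIVE_TAGS):
        return "positive"
    if head.startswith(NEGATIVE_TAGS):
        return "negative"
    if head.startswith(NEUTRAL_TAGS):
        return "neutral"
    return None  # no recognizable verdict; exclude from training data

comments = [  # toy stand-ins for comments scraped from ~60,000 posts
    "NTA, your roommate agreed to the cleaning schedule.",
    "YTA. You read her diary.",
    "ESH, honestly nobody in this story communicated.",
]
corpora = {"positive": [], "negative": [], "neutral": []}
for c in comments:
    label = bucket(c)
    if label:
        corpora[label].append(c)
print({k: len(v) for k, v in corpora.items()})  # one corpus per bot
```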

Users can submit their own moral dilemmas — real or not — and get a positive response, a negative response, and a swing response that can go either way. The three AI models are trained on data derived from Reddit users passing judgment, so what results is a funny microcosm of what it’s like to debate on the internet now. Any topic can inspire strong, contradictory reactions from total strangers. “When reading the results of a judgment, note the way in which the AI constructs ideas from snippets of human reasoning,” their website reads. “Sometimes the AI can produce stunning results, but it is fundamentally attempting to mimic the ways that humans put together arguments.”

Read more of this story at Slashdot.

How Social Media Made Us Stupid – and How to Fix It

Jonathan Haidt, a social psychologist at New York University’s Stern School of Business, argues in the Atlantic that social-media platforms “trained users to spend more time performing and less time connecting.” But that was just the beginning.

He now believes this ultimately fueled a viral dynamic leading to “the continual chipping-away of trust” in a democracy which “depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.”

The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia)…. Mark Zuckerberg may not have wished for any of that. But by rewiring everything in a headlong rush for growth — with a naive conception of human psychology, little understanding of the intricacy of institutions, and no concern for external costs imposed on society — Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.

In the last 10 years, the article argues, the general public — at least in America — became “uniquely stupid.” And he’s speaking not just of the divide between the political right and left, but of fragmentation within both factions, “as well as within universities, companies, professional associations, museums, and even families.” The article quotes former CIA analyst Martin Gurri’s comment in 2019 that the digital revolution has highly fragmented the public into hostile shards that are “mostly people yelling at each other and living in bubbles of one sort or another.”

The article concludes that by now U.S. politics has entered a phase where truth “cannot achieve widespread adherence” and thus “nothing really means anything anymore — at least not in a way that is durable and on which people widely agree.” It even contemplates the idea of “highly believable” disinformation generated by AI, possibly by geopolitical adversaries, ultimately evolving into what the research manager at the Stanford Internet Observatory has described as “an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality.”

But then the article also suggests possible reforms:
The Facebook whistleblower Frances Haugen advocates for simple changes to the architecture of the platforms, rather than for massive and ultimately futile efforts to police all content. For example, she has suggested modifying the “Share” function on Facebook so that after any content has been shared twice, the third person in the chain must take the time to copy and paste the content into a new post. Reforms like this…don’t stop anyone from saying anything; they just slow the spread of content that is, on average, less likely to be true.
Perhaps the biggest single change that would reduce the toxicity of existing platforms would be user verification as a precondition for gaining the algorithmic amplification that social media offers. Banks and other industries have “know your customer” rules so that they can’t do business with anonymous clients laundering money from criminal enterprises. Large social-media platforms should be required to do the same…. This one change would wipe out most of the hundreds of millions of bots and fake accounts that currently pollute the major platforms…. Research shows that antisocial behavior becomes more common online when people feel that their identity is unknown and untraceable.
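
Both proposals amount to simple gating rules in a platform's sharing and ranking code paths. A minimal sketch of how they might be expressed, with all names and thresholds hypothetical:

```python
from dataclasses import dataclass

RESHARE_LIMIT = 2  # per Haugen's proposal: friction begins at the third hop

@dataclass
class Post:
    author_verified: bool
    share_depth: int = 0  # reshares preceding this copy in the chain

def can_one_click_reshare(post):
    """Haugen-style friction: after two shares, the next person must copy
    and paste into a new post, which would start a fresh chain."""
    return post.share_depth < RESHARE_LIMIT

def amplification_score(post, base_score):
    """Haidt's proposed gate: unverified accounts can still post, but earn
    no algorithmic boost from ranking or recommendation."""
    return base_score if post.author_verified else 0.0

# Third hop in a chain: one-click resharing is disabled.
print(can_one_click_reshare(Post(author_verified=True, share_depth=2)))   # False
# An unverified account's post gets no amplification, however engaging.
print(amplification_score(Post(author_verified=False), base_score=87.5))  # 0.0
```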

In any case, the growing evidence that social media is damaging democracy is sufficient to warrant greater oversight by a regulatory body, such as the Federal Communications Commission or the Federal Trade Commission. One of the first orders of business should be compelling the platforms to share their data and their algorithms with academic researchers.

The members of Gen Z — those born in and after 1997 — bear none of the blame for the mess we are in, but they are going to inherit it, and the preliminary signs are that older generations have prevented them from learning how to handle it…. Congress should update the Children’s Online Privacy Protection Act, which unwisely set the age of so-called internet adulthood (the age at which companies can collect personal information from children without parental consent) at 13 back in 1998, while making little provision for effective enforcement. The age should be raised to at least 16, and companies should be held responsible for enforcing it. More generally, to prepare the members of the next generation for post-Babel democracy, perhaps the most important thing we can do is let them out to play. Stop starving children of the experiences they most need to become good citizens: free play in mixed-age groups of children with minimal adult supervision…

The article closes with its own note of hope — and a call to action:

In recent years, Americans have started hundreds of groups and organizations dedicated to building trust and friendship across the political divide, including BridgeUSA, Braver Angels (on whose board I serve), and many others listed at BridgeAlliance.us. We cannot expect Congress and the tech companies to save us. We must change ourselves and our communities.

Read more of this story at Slashdot.

EU Clears First Autonomous X-Ray-Analyzing AI

An artificial intelligence tool that reads chest X-rays without oversight from a radiologist received regulatory clearance in the European Union last week — a first for a fully autonomous medical imaging AI, according to a statement from the company, Oxipit. The Verge reports: The tool, called ChestLink, scans chest X-rays and automatically sends patient reports on those it sees as totally healthy, with no abnormalities. Any images the tool flags as having a potential problem are sent to a radiologist for review. Most X-rays in primary care don’t have any problems, so automating the process for those scans could cut down on radiologists’ workloads, Oxipit said in informational materials.
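
The workflow The Verge describes reduces to a thresholded branch: auto-report scans the model is confident are normal, and route everything else to a human reader. A schematic sketch (Oxipit's actual model, threshold, and interfaces are not public; the stub exists only to make the flow runnable):

```python
NORMAL_THRESHOLD = 0.99  # hypothetical cutoff; a real one is set clinically

class StubModel:
    """Stand-in for the actual classifier, which is not public."""
    def predict_normal_probability(self, image):
        return 0.999  # fixed value purely for demonstration

def triage(image, model, send_report, queue_for_radiologist):
    """Auto-report confidently normal scans; escalate everything else."""
    p_normal = model.predict_normal_probability(image)
    if p_normal >= NORMAL_THRESHOLD:
        send_report(image, "No abnormalities detected (automated report)")
    else:
        queue_for_radiologist(image, "Potential abnormality; needs review")

triage("xray_0001.png", StubModel(),
       send_report=lambda img, msg: print("report:", img, msg),
       queue_for_radiologist=lambda img, msg: print("queue:", img, msg))
```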

The tech now has a CE mark certification in the EU, which signals that a device meets safety standards. The certification is similar to Food and Drug Administration (FDA) clearance in the United States, but they have slightly different metrics: a CE mark is less difficult to obtain, is quicker, and doesn’t require as much evaluation as an FDA clearance. The FDA looks to see if a device is safe and effective and tends to ask for more information from device makers. Oxipit spokesperson Mantas Miksys told The Verge that the company plans to file with the FDA as well.

Oxipit said in a statement that ChestLink made zero “clinically relevant” errors during pilot programs at multiple locations. When it is introduced into a new setting, the company said there should first be an audit of existing imaging programs. Then, the tool should be used under supervision for a period of time before it starts working autonomously. The company said in a statement that it expects the first healthcare organizations to be using the autonomous tool by 2023.

Read more of this story at Slashdot.

Face Scanner Clearview AI Aims To Branch Out Beyond Police

A controversial facial recognition company that’s built a massive photographic dossier of the world’s people for use by police, national governments and — most recently — the Ukrainian military is now planning to offer its technology to banks and other private businesses. The Washington Post reports: Clearview AI co-founder and CEO Hoan Ton-That disclosed the plans Friday to The Associated Press in order to clarify a recent federal court filing that suggested the company was up for sale. “We don’t have any plans to sell the company,” he said. Instead, he said the New York startup is looking to launch a new business venture to compete with the likes of Amazon and Microsoft in verifying people’s identity using facial recognition.

The new “consent-based” product would use Clearview’s algorithms to verify a person’s face, but would not involve its ever-growing trove of some 20 billion images, which Ton-That said is reserved for law enforcement use. Such ID checks that can be used to validate bank transactions or for other commercial purposes are the “least controversial use case” of facial recognition, he said. That’s in contrast to the business practice for which Clearview is best known: collecting a huge trove of images posted on Facebook, YouTube and just about anywhere else on the publicly accessible internet.
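
A “consent-based” identity check of this kind is one-to-one verification: a live capture is compared against a single enrolled face, with no search across a photo database. A generic sketch of such a check using embedding similarity (the vectors, threshold, and functions are illustrative, not Clearview's algorithm):

```python
import math

MATCH_THRESHOLD = 0.6  # hypothetical similarity cutoff

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def verify(enrolled_embedding, live_embedding):
    """1:1 check: does the live face match the one enrolled face?
    No gallery search over a trove of scraped images is involved."""
    return cosine_similarity(enrolled_embedding, live_embedding) >= MATCH_THRESHOLD

# Embeddings would come from a face-encoder network; toy vectors here.
print(verify([0.10, 0.90, 0.20], [0.12, 0.88, 0.25]))  # True: close match
```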

Read more of this story at Slashdot.

Microsoft Details ‘Planet-Scale’ AI Infrastructure Packing 100,000+ GPUs

Microsoft has revealed it operates a planet-scale distributed scheduling service for AI workloads that it has modestly dubbed “Singularity.” The Register reports: Described in a pre-press paper [PDF] co-authored by 26 Microsoft employees, Singularity’s aim is described as helping the software giant control costs by driving high utilization for deep learning workloads. Singularity achieves that goal with what the paper describes as a “novel workload-aware scheduler that can transparently preempt and elastically scale deep learning workloads to drive high utilization without impacting their correctness or performance, across a global fleet of AI accelerators (e.g., GPUs, FPGAs).”

The paper spends more time on the scheduler than on Singularity itself, but does offer some figures to depict the system’s architecture. An analysis of Singularity’s performance mentions a test run on Nvidia DGX-2 servers, each with two 20-core Xeon Platinum 8168 CPUs, eight Nvidia V100 GPUs, and 692GB of RAM, networked over InfiniBand. With hundreds of thousands of GPUs in the Singularity fleet, plus FPGAs and possibly other accelerators, Microsoft has at least tens of thousands of such servers! The paper focuses on Singularity’s scaling tech and schedulers, which it asserts are its secret sauce because they reduce cost and increase reliability.

The software automatically decouples jobs from accelerator resources, which means when jobs scale up or down “we simply change the number of devices the workers are mapped to: this is completely transparent to the user, as the world-size (i.e. total number of workers) of the job remains the same regardless of the number of physical devices running the job.” That’s possible thanks to “a novel technique called replica splicing that makes it possible to time-slice multiple workers on the same device with negligible overhead, while enabling each worker to use the entire device memory.” […] “Singularity achieves a significant breakthrough in scheduling deep learning workloads, converting niche features such as elasticity into mainstream, always-on features that the scheduler can rely on for implementing stringent SLAs,” the paper concludes.
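
The invariant in that passage is that the job's world size stays fixed while the number of physical devices changes, with workers time-sliced onto whatever devices remain. A toy illustration of such a mapping (not Microsoft's implementation):

```python
def map_workers_to_devices(world_size, devices):
    """Assign a fixed set of logical workers to a variable set of devices.
    When devices shrink, several workers time-slice one device; the job
    still sees `world_size` workers either way."""
    assignment = {d: [] for d in devices}
    for worker in range(world_size):
        assignment[devices[worker % len(devices)]].append(worker)
    return assignment

# 8 logical workers on 8 GPUs: one worker per device.
print(map_workers_to_devices(8, [f"gpu{i}" for i in range(8)]))
# Scaled down to 2 GPUs: four workers per device, world size unchanged.
print(map_workers_to_devices(8, [f"gpu{i}" for i in range(2)]))
```

Replica splicing, per the paper, is what makes such time-slicing cheap in practice; the remapping itself stays invisible to the training job.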

Read more of this story at Slashdot.

Public Agencies Are Buying Up AI-Driven Hiring Tools and ‘Bossware’

Through public records requests, The Markup found more than 20 public agencies using the sometimes-controversial software. From the report: In 2020, the FDA’s Center for Drug Evaluation and Research (CDER) faced a daunting task: It needed to fill more than 900 job vacancies — and fast. The center, which does things like inspect pharmaceutical manufacturing facilities, was in the process of modernizing the FDA’s New Drugs Regulatory Program just as the pandemic started. It faced “a surge in work,” along with new constraints that have affected everyone during the pandemic, including travel limitations and lockdowns. So the center decided to turn to an artificial intelligence tool to speed up hiring, according to records obtained by The Markup. According to the background section of a statement of work, the center, along with the Office of Management and the Division of Management Services, was developing a “recruitment plan to leverage artificial intelligence (AI) to assist in the time to hire process.”

The agency ultimately chose to use HireVue, an online platform that allows employers to review asynchronously recorded video interviews and have recruits play video games as part of their application process. Over the years the platform has also offered a variety of AI features to automatically score candidates. HireVue, controversially, used to offer facial analysis to predict whether an applicant would be a good fit for an open job. In recent years, research has shown that facial recognition software is racially biased. In 2019, the company’s continued use of the technique led one member of its scientific advisory board to resign. It has since stopped using facial recognition. The Markup used GovSpend, a database of procurement records for U.S. agencies at the state, local, and federal levels, to identify agencies that use HireVue. We also searched for agencies using Teramind and ActivTrak, two other controversial programs that allow employers to remotely monitor their workers’ browsing activities through screenshots and logs. The Markup contacted and filed public records requests with the 24 agencies it identified to understand how they were using the software. Eleven public agencies, including the FDA, replied to The Markup with documents or confirmations that they had bought HireVue at some point since 2017. Of the six public agencies that confirmed to The Markup that they actually used the software, all but one — Lake Travis Independent School District in Texas — said they did not make use of its AI scoring features. Documents and responses from 13 agencies confirmed that they purchased Teramind or ActivTrak at some point during the same time frame.

Read more of this story at Slashdot.

South Korea To Test AI-Powered Facial Recognition To Track COVID-19 Cases

South Korea will soon roll out a pilot project to use artificial intelligence, facial recognition and thousands of CCTV cameras to track the movement of people infected with the coronavirus, despite concerns about the invasion of privacy. Reuters reports: The nationally funded project in Bucheon, one of the country’s most densely populated cities on the outskirts of Seoul, is due to become operational in January, a city official told Reuters. The system uses AI algorithms and facial recognition technology to analyze footage gathered by more than 10,820 CCTV cameras and track an infected person’s movements, anyone they had close contact with, and whether they were wearing a mask, according to a 110-page business plan from the city submitted to the Ministry of Science and ICT (Information and Communications Technology), and provided to Reuters by a parliamentary lawmaker critical of the project.
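
Functionally, the plan describes a pipeline: detect and identify faces in CCTV footage, match them against confirmed cases, then log movements, mask status, and close contacts. A schematic of that flow, in which every name and threshold is a hypothetical stand-in rather than the Bucheon system's software:

```python
from dataclasses import dataclass

CONTACT_RADIUS_M = 2.0   # assumed close-contact distance
CONTACT_WINDOW_S = 60    # assumed co-presence time window

@dataclass
class Detection:
    person_id: str      # from face matching; "unknown" when no match
    camera_id: str
    timestamp: float
    position: tuple     # (x, y) in a shared map frame
    wearing_mask: bool

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def trace(detections, infected_ids):
    """Log an infected person's movements and flag nearby people."""
    events = []
    for d in (x for x in detections if x.person_id in infected_ids):
        events.append(("movement", d.person_id, d.camera_id, d.wearing_mask))
        for o in detections:
            if (o.person_id != d.person_id
                    and abs(o.timestamp - d.timestamp) <= CONTACT_WINDOW_S
                    and dist(o.position, d.position) <= CONTACT_RADIUS_M):
                events.append(("close_contact", d.person_id, o.person_id))
    return events

feed = [Detection("case_17", "cam042", 0.0, (3.0, 4.0), False),
        Detection("unknown", "cam042", 10.0, (3.5, 4.2), True)]
print(trace(feed, infected_ids={"case_17"}))
```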

The Bucheon official said the system should reduce the strain on overworked tracing teams in a city with a population of more than 800,000 people, and help deploy the teams more efficiently and accurately. […] The Ministry of Science and ICT said it has no current plans to expand the project to the national level. It said the purpose of the system was to digitize some of the manual labour that contact tracers currently have to carry out. The Bucheon system can simultaneously track up to ten people in five to ten minutes, cutting the half hour to one hour of manual work it currently takes to trace one person, the plan said.

Read more of this story at Slashdot.