India Gambles On Building a Leading Drone Industry

The Indian government wants to develop a home-grown industry that can design and assemble drones and make the components that go into their manufacture. The BBC reports: “Drones can be significant creators of employment and economic growth due to their versatility, and ease of use, especially in India’s remote areas,” says Amber Dubey, former joint secretary at the Ministry of Civil Aviation. “Given its traditional strengths in innovation, information technology, frugal engineering and its huge domestic demand, India has the potential of becoming a global drone hub by 2030,” he tells the BBC. Over the next three years, Mr Dubey expects as much as 50 billion rupees ($630 million) to be invested in the sector.

[…] However, despite the excitement and investment around India’s drone industry, even those in the sector advise caution. “India has set a goal of being a hub of drones by 2030, but I think we should be cautious because we at present don’t have an ecosystem and technology initiatives in place,” says Rajiv Kumar Narang, from the Drone Federation of India. He says the industry needs a robust regulator that can oversee safety and help develop an air traffic control system for drones. That will be particularly important as the aircraft become larger, says Mr Narang. “Initiatives have to come from the government. A single entity or a nodal ministry has to take this forward if we want to reach a goal of being the hub by 2030,” he says.

India also lacks the network of firms needed to make all the components that go into making a drone. At the moment many parts, including batteries, motors and flight controllers, are imported. But the government is confident an incentive scheme will help boost domestic firms. “The components industry will take two to three years to build, since it traditionally works on low margin and high volumes,” says Mr Dubey. Despite those reservations, firms are confident there will be demand for drones and people to fly them. Chirag Sharma is the chief executive of Drone Destination, which has trained more than 800 pilots and instructors since the rules on drone use were first relaxed in August 2021. He estimates that India will need up to 500,000 certified pilots over the next five years.

Read more of this story at Slashdot.

Instagram Jumps Into NFTs With Minting and Selling Feature

Meta’s Instagram will soon allow artists to create and sell their own NFTs both right on the social media platform and off it. Axios reports: IG’s feature will roll out to a small group of select creators in the U.S. to start, according to Meta. The first creators tapped to test the feature include photographer Isaac “Drift” Wright, known as DrifterShoots, and artist Amber Vittoria. Meta won’t charge fees for posting or sharing an NFT on IG, though app store fees still apply. Separately, there will be a “professional mode” for Facebook profiles for creators to build a social media presence separate from their personal one. Artist royalties appear to be a part of the plan.

Minting, or the creation of NFTs, on IG will start on Polygon, a boon for the layer-2 blockchain (a separate blockchain built on top of Ethereum) given the potential onboarding of IG’s billion active users. The price of Polygon’s token MATIC jumped 17% from Wednesday evening to Thursday morning, boosted by the IG news but also because JPMorgan conducted its first live DeFi trade using that blockchain. The platform is adding support for the Solana blockchain and the Phantom wallet with the latest feature update, adding them to the list of already-supported wallets such as MetaMask, Coinbase Wallet, Dapper Wallet, Rainbow and Trust Wallet. The Ethereum and Flow blockchains are already supported. Info for selected collections with OpenSea metadata, like collection name and description, will show up on IG.

Read more of this story at Slashdot.

New Hampshire Set To Pilot Voting Machines That Use Open-Source Software

According to The Record, New Hampshire will pilot a new kind of voting machine that will use open-source software to tally the votes. The Record reports: The software that runs voting machines is typically distributed in a kind of black box — like a car with its hood sealed shut. Because the election industry in the U.S. is dominated by three companies — Dominion, Election Systems & Software and Hart InterCivic — the software that runs their machines is private. The companies consider it their intellectual property, and that has given rise to a roster of unfounded conspiracy theories about elections and their fairness. New Hampshire’s experiment with open-source software is meant to address exactly that. The software by its very design allows you to pop the hood, modify the code, make suggestions for how to make it better, and work with other people to make it run more smoothly. The thinking is that if voting machines run on software anyone can audit and run, they are less likely to give rise to allegations of vote rigging.

The effort to make voting machines more transparent is the work of a group called VotingWorks. […] On November 8, VotingWorks machines will be used in a real election in real time. New Hampshire is the second state to use the open-source machines after Mississippi first did so in 2019. Some 3,000 voters will run their paper ballots through the new machines, and then, to ensure nothing went awry, those same votes will be hand counted in a public session in Concord, N.H. Anyone who cares to will be able to see if the new machines recorded the votes correctly. The idea is to make clear there is nothing to hide. If someone is worried that a voting machine is programmed to flip a vote to their opponent, they can simply hire a computer expert to examine it and see, in real time.
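The Record's description boils down to a simple check: the machine produces a tally, and a public hand count of the same ballots should match it candidate for candidate. As a purely hypothetical illustration (not VotingWorks code; the candidate names and counts are made up), that comparison can be sketched in a few lines of Python:

```python
# Hypothetical sketch of the public check described above: compare the
# machine's tally of paper ballots against the hand count and report any
# per-candidate discrepancies. Not VotingWorks code; the data is made up.
from collections import Counter

def compare_tallies(machine_ballots, hand_count):
    """Return candidates whose machine tally differs from the hand count."""
    machine_tally = Counter(machine_ballots)
    candidates = set(machine_tally) | set(hand_count)
    return {
        c: machine_tally.get(c, 0) - hand_count.get(c, 0)
        for c in candidates
        if machine_tally.get(c, 0) != hand_count.get(c, 0)
    }

# Example: roughly 3,000 ballots, with the two counts disagreeing by one vote.
machine_ballots = ["Candidate A"] * 1612 + ["Candidate B"] * 1387
hand_count = {"Candidate A": 1612, "Candidate B": 1388}
print(compare_tallies(machine_ballots, hand_count) or "Tallies match")
# -> {'Candidate B': -1}
```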

Read more of this story at Slashdot.

Google’s Text-To-Image AI Model Imagen Is Getting Its First (Very Limited) Public Outing

Google’s text-to-image AI system, Imagen, is being added to the company’s AI Test Kitchen app as a way to collect early feedback on the technology. The Verge reports: AI Test Kitchen was launched earlier this year as a way for Google to beta test various AI systems. Currently, the app offers a few different ways to interact with Google’s text model LaMDA (yes, the same one that the engineer thought was sentient), and the company will soon be adding similarly constrained Imagen requests as part of what it calls a “season two” update to the app. In short, there’ll be two ways to interact with Imagen, which Google demoed to The Verge ahead of the announcement today: “City Dreamer” and “Wobble.”

In City Dreamer, users can ask the model to generate elements from a city designed around a theme of their choice — say, pumpkins, denim, or the color blerg. Imagen creates sample buildings and plots (a town square, an apartment block, an airport, and so on), with all the designs appearing as isometric models similar to what you’d see in SimCity. In Wobble, you create a little monster. You can choose what it’s made out of (clay, felt, marzipan, rubber) and then dress it in the clothing of your choice. The model generates your monster, gives it a name, and then you can sort of poke and prod the thing to make it “dance.” Again, the model’s output is constrained to a very specific aesthetic, which, to my mind, looks like a cross between Pixar’s designs for Monsters, Inc. and the character creator feature in Spore. (Someone on the AI team must be a Will Wright fan.) These interactions are extremely constrained compared to other text-to-image models, and users can’t just request anything they’d like. That’s intentional on Google’s part, though. As Josh Woodward, senior director of product management at Google, explained to The Verge, the whole point of AI Test Kitchen is to a) get feedback from the public on these AI systems and b) find out more about how people will break them.
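Google hasn't said how City Dreamer and Wobble are implemented, but the constrained design described above — users pick from a small menu and the app assembles the actual request — is easy to sketch in outline. The Python below is a hypothetical illustration of that pattern only; the option lists, the prompt template, and the stand-in for the image-generation call are assumptions, not Google's code:

```python
# Hypothetical sketch of a "constrained" text-to-image interaction in the
# spirit of City Dreamer: the user only chooses from fixed menus, and the app
# expands a fixed prompt template, so free-form requests never reach the model.
ALLOWED_THEMES = {"pumpkins", "denim"}
ALLOWED_ELEMENTS = {"town square", "apartment block", "airport"}

PROMPT_TEMPLATE = (
    "an isometric 3D model of a {element} in a city themed around {theme}, "
    "SimCity-style, clean studio lighting"
)

def build_prompt(theme: str, element: str) -> str:
    """Validate the user's menu choices, then fill in the fixed template."""
    if theme not in ALLOWED_THEMES or element not in ALLOWED_ELEMENTS:
        raise ValueError("choice is outside the constrained menu")
    return PROMPT_TEMPLATE.format(theme=theme, element=element)

# e.g. build_prompt("pumpkins", "airport") produces the string that would be
# handed to whatever image-generation endpoint the app actually calls.
print(build_prompt("pumpkins", "airport"))
```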

Google wouldn’t share any data on how many people are actually using AI Test Kitchen (“We didn’t set out to make this a billion user Google app,” says Woodward) but says the feedback it’s getting is invaluable. “Engagement is way above our expectations,” says Woodward. “It’s a very active, opinionated group of users.” He notes the app has been useful in reaching “certain types of folks — researchers, policymakers” who can use it to better understand the limitations and capabilities of state-of-the-art AI models. Still, the big question is whether Google will want to push these models to a wider public and, if so, what form that will take. Already, the company’s rivals, OpenAI and Stability AI, are rushing to commercialize text-to-image models. Will Google ever feel its systems are safe enough to take out of the AI Test Kitchen and serve up to its users?

Read more of this story at Slashdot.