New Hampshire Set To Pilot Voting Machines That Use Open-Source Software

According to The Record, New Hampshire will pilot a new kind of voting machine that uses open-source software to tally the votes. The Record reports: The software that runs voting machines is typically distributed in a kind of black box — like a car with its hood sealed shut. Because the election industry in the U.S. is dominated by three companies — Dominion, Election Systems & Software, and Hart InterCivic — the software that runs their machines is private. The companies consider it their intellectual property, and that has given rise to a roster of unfounded conspiracy theories about elections and their fairness. New Hampshire’s experiment with open-source software is meant to address exactly that. Open-source software, by its very design, lets anyone pop the hood, modify the code, suggest improvements, and work with other people to make it run more smoothly. The thinking is that if voting machines run on software anyone can audit and run, they are less likely to give rise to allegations of vote rigging.

The effort to make voting machines more transparent is the work of a group called VotingWorks. […] On November 8, VotingWorks machines will be used in a real election in real time. New Hampshire is the second state to use the open-source machines after Mississippi first did so in 2019. Some 3,000 voters will run their paper ballots through the new machines, and then, to ensure nothing went awry, those same votes will be hand counted in a public session in Concord, N.H. Anyone who cares to will be able to see if the new machines recorded the votes correctly. The idea is to make clear there is nothing to hide. If someone is worried that a voting machine is programmed to flip a vote to their opponent, they can simply hire a computer expert to examine it and see, in real time.

Read more of this story at Slashdot.

Google’s Text-To-Image AI Model Imagen Is Getting Its First (Very Limited) Public Outing

Google’s text-to-image AI system, Imagen, is being added to the company’s AI Test Kitchen app as a way to collect early feedback on the technology. The Verge reports: AI Test Kitchen was launched earlier this year as a way for Google to beta test various AI systems. Currently, the app offers a few different ways to interact with Google’s text model LaMDA (yes, the same one that the engineer thought was sentient), and the company will soon be adding similarly constrained Imagen requests as part of what it calls a “season two” update to the app. In short, there’ll be two ways to interact with Imagen, which Google demoed to The Verge ahead of the announcement today: “City Dreamer” and “Wobble.”

In City Dreamer, users can ask the model to generate elements from a city designed around a theme of their choice — say, pumpkins, denim, or the color blerg. Imagen creates sample buildings and plots (a town square, an apartment block, an airport, and so on), with all the designs appearing as isometric models similar to what you’d see in SimCity. In Wobble, you create a little monster. You can choose what it’s made out of (clay, felt, marzipan, rubber) and then dress it in the clothing of your choice. The model generates your monster, gives it a name, and then you can sort of poke and prod the thing to make it “dance.” Again, the model’s output is constrained to a very specific aesthetic, which, to my mind, looks like a cross between Pixar’s designs for Monsters, Inc. and the character creator feature in Spore. (Someone on the AI team must be a Will Wright fan.) These interactions are extremely constrained compared to other text-to-image models, and users can’t just request anything they’d like. That’s intentional on Google’s part, though. As Josh Woodward, senior director of product management at Google, explained to The Verge, the whole point of AI Test Kitchen is to a) get feedback from the public on these AI systems and b) find out more about how people will break them.

Google wouldn’t share any data on how many people are actually using AI Test Kitchen (“We didn’t set out to make this a billion user Google app,” says Woodward) but says the feedback it’s getting is invaluable. “Engagement is way above our expectations,” says Woodward. “It’s a very active, opinionated group of users.” He notes the app has been useful in reaching “certain types of folks — researchers, policymakers” who can use it to better understand the limitations and capabilities of state-of-the-art AI models. Still, the big question is whether Google will want to push these models to a wider public and, if so, what form that will take. Already, the company’s rivals, OpenAI and Stability AI, are rushing to commercialize text-to-image models. Will Google ever feel its systems are safe enough to take out of the AI Test Kitchen and serve up to its users?

Read more of this story at Slashdot.

Meta’s AI-Powered Audio Codec Promises 10x Compression Over MP3

Last week, Meta announced an AI-powered audio compression method called “EnCodec” that can reportedly compress audio to one-tenth the size of 64kbps MP3 with no loss in quality. Meta says this technique could dramatically improve the sound quality of speech on low-bandwidth connections, such as phone calls in areas with spotty service. The technique also works for music. Ars Technica reports: Meta debuted the technology on October 25 in a paper titled “High Fidelity Neural Audio Compression,” authored by Meta AI researchers Alexandre Defossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. Meta also summarized the research on its blog devoted to EnCodec.

Meta describes its method as a three-part system trained to compress audio to a desired target size. First, the encoder transforms uncompressed data into a lower frame rate “latent space” representation. The “quantizer” then compresses the representation to the target size while keeping track of the most important information that will later be used to rebuild the original signal. (This compressed signal is what gets sent through a network or saved to disk.) Finally, the decoder turns the compressed data back into audio in real time using a neural network on a single CPU.
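The quantizer is the heart of that pipeline. Below is a toy sketch of residual vector quantization, the family of techniques neural codecs of this kind typically use for the quantizer stage — each codebook encodes what the previous stages missed, so adding stages raises the bitrate and the fidelity. All names, dimensions, and the random codebooks here are illustrative assumptions, not Meta’s actual implementation:

```python
import numpy as np

def residual_vq(latent, codebooks):
    """Quantize a latent vector in stages: each codebook encodes the
    residual the previous stages failed to capture."""
    residual = latent.copy()
    indices = []
    for cb in codebooks:                      # cb has shape (num_codes, dim)
        dists = np.linalg.norm(residual - cb, axis=1)
        idx = int(np.argmin(dists))
        indices.append(idx)
        residual = residual - cb[idx]
    return indices                            # compressed form: a few small integers

def decode(indices, codebooks):
    """Rebuild an approximate latent by summing the chosen code vectors."""
    return sum(cb[i] for i, cb in zip(indices, codebooks))

rng = np.random.default_rng(0)
dim, num_codes, stages = 8, 16, 4
codebooks = [rng.normal(size=(num_codes, dim)) for _ in range(stages)]
for cb in codebooks:
    cb[0] = 0.0   # a zero code lets a stage abstain, so error never grows

latent = rng.normal(size=dim)
indices = residual_vq(latent, codebooks)
approx = decode(indices, codebooks)
```

The compressed signal that “gets sent through a network or saved to disk” is just `indices` — a handful of small integers per latent frame, which is where the size reduction comes from.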

Meta’s use of discriminators proves key to creating a method for compressing the audio as much as possible without losing key elements of a signal that make it distinctive and recognizable: “The key to lossy compression is to identify changes that will not be perceivable by humans, as perfect reconstruction is impossible at low bit rates. To do so, we use discriminators to improve the perceptual quality of the generated samples. This creates a cat-and-mouse game where the discriminator’s job is to differentiate between real samples and reconstructed samples. The compression model attempts to generate samples to fool the discriminators by pushing the reconstructed samples to be more perceptually similar to the original samples.”
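The cat-and-mouse game described above is a standard adversarial setup. As a simplified sketch, here are the hinge-style losses commonly used with audio discriminators — the function names are illustrative, and a real training objective would combine this adversarial term with reconstruction losses:

```python
import numpy as np

def discriminator_loss(real_scores, fake_scores):
    """Hinge loss: push scores for real audio above +1 and scores for
    reconstructed ("fake") audio below -1."""
    return (np.mean(np.maximum(0.0, 1.0 - real_scores))
            + np.mean(np.maximum(0.0, 1.0 + fake_scores)))

def generator_loss(fake_scores):
    """The compression model wins by making its reconstructions score
    high, i.e. indistinguishable from real samples."""
    return -np.mean(fake_scores)

# A discriminator that already separates real from fake pays no loss...
print(discriminator_loss(np.array([2.0, 2.0]), np.array([-2.0, -2.0])))  # 0.0
# ...while the compression model is penalized until its fakes score higher.
print(generator_loss(np.array([-2.0, -2.0])))  # 2.0
```

Training alternates between the two objectives, which is what pushes the reconstructed samples to be perceptually closer to the originals.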

Read more of this story at Slashdot.

Tumblr Will Now Allow Nudity But Not Explicit Sex

Tumblr has made an update it hinted at in September, changing its rules to allow nudity — but not sexually explicit images — on the platform. The Verge reports: The company updated its community guidelines earlier today, laying out a set of rules that stops short of its earlier permissive attitude toward sexuality but that formally allows a wider range of imagery. “We now welcome a broader range of expression, creativity, and art on Tumblr, including content depicting the human form (yes, that includes the naked human form). So, even if your creations contain nudity, mature subject matter, or sexual themes, you can now share them on Tumblr using the appropriate Community Label,” the post says. “Visual depictions of sexually explicit acts remain off-limits on Tumblr.”

A help center post and the community guidelines offer a little more detail. They say that “text, images, and videos that contain nudity, offensive language, sexual themes, or mature subject matter” are allowed on Tumblr, but “visual depictions of sexually explicit acts (or content with an overt focus on genitalia)” aren’t. There’s an exception for “historically significant art that you may find in a mainstream museum and which depicts sex acts — such as from India’s Sunga Empire,” although it must be labeled with a mature content or “sexual themes” tag so that users can filter it from their dashboards.

“Nudity and other kinds of adult material are generally welcome. We’re not here to judge your art, we just ask that you add a Community Label to your mature content so that people can choose to filter it out of their Dashboard if they prefer,” say the community guidelines. However, users can’t post links or ads to “adult-oriented affiliate networks,” they can’t advertise “escort or erotic services,” and they can’t post content that “promotes pedophilia,” including “sexually suggestive” content with images of children. On December 17th, 2018, Tumblr permanently banned adult content from its platform. The site was owned by Verizon at the time and later sold to WordPress.com owner Automattic, which largely maintained the ban “in large part because internet infrastructure services — like payment processors and Apple’s iOS App Store — typically frown on explicit adult content,” reports The Verge.

Read more of this story at Slashdot.

NYC Employers Can No Longer Hide Salary Ranges In Job Listings

Starting Tuesday, New York City employers must disclose salary information in job ads, thanks to a new pay transparency law that will reverberate nationwide. Axios reports: What’s happening: Employers have spent months getting ready for this. They’ll now have to post salary ranges for open roles — but many didn’t have any established pay bands at all, says Allan Bloom, a partner at Proskauer who’s advising companies. Already, firms like American Express, JPMorgan Chase and Macy’s have added pay bands to their help-wanted ads, reports the Wall Street Journal.

How it works: Companies with four or more employees must post a salary range for any open role that’s performed in the city — or could be performed in the city. Violators could ultimately be fined up to $250,000 — though a first offense just gets a warning.

Reality check: It’s a pretty squishy requirement. The law requires only that salary ranges be in “good faith” — and there’s no penalty for paying someone outside of the range posted. It will be difficult for enforcement officials to prove a salary range is in bad faith, Bloom says. “The low-hanging fruit will be [going after] employers that don’t post any range whatsoever.” Many of the ranges posted online now are pretty wide. A senior analyst role advertised on the Macy’s jobs site is listed as paying between $85,320 and $142,080 a year. A senior podcast producer role at the WSJ advertises an “NYC pay range” of $50,000 – $180,000. The wide ranges could be particularly reasonable if these roles can be performed remotely, as some companies adjust pay according to location.

Read more of this story at Slashdot.

Electric Scooter Ban Increased Congestion In Atlanta By 10%, Study Finds

A study published last week in the scientific journal Nature Energy examined the effects on traffic and travel time in a city when micromobility options like electric scooters and e-bikes are banned. The results documented exactly how much traffic increased as a result of people switching back to personal cars instead of smaller, more urban-appropriate vehicles. Electrek reports: The study, titled “Impacts of micromobility on car displacement with evidence from a natural experiment and geofencing policy,” was performed using data collected in Atlanta. The study was made possible due to the city’s sudden ban on shared micromobility devices at night. That ban provided a unique opportunity to compare traffic levels and travel times before and after the policy change. The ban occurred on August 9, 2019, and restricted use of shared e-bikes and e-scooters in the city between the hours of 9 p.m. and 4 a.m. The study’s authors used high-resolution data from June 25, 2019, to September 22, 2019, from Uber Movement to measure changes in evening travel times before and after the policy implementation. That created a window of analysis of 45 days with and without shared e-bike and e-scooter use at night.

The study found that on average, travel times for car trips in Atlanta during evening hours increased by between 9.9 and 10.7 percent immediately following the ban on shared micromobility. For an average commuter in Atlanta, that translated to an extra 2-5 minutes per evening trip. The authors also concluded that the impact on commute times would likely be higher in other cities across the country. According to the study, “based on the estimated US average commute time of 27.6 minutes in 2019, the results from our natural experiment imply a 17.4% increase in travel time nationally.”
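The national extrapolation quoted above is simple arithmetic, and it is consistent with the Atlanta observations. A quick back-of-envelope check, using only the figures from the quoted summary:

```python
avg_commute_min = 27.6      # estimated US average commute time, 2019
implied_increase = 0.174    # 17.4% national increase implied by the study
extra_minutes = avg_commute_min * implied_increase

# About 4.8 extra minutes per commute nationally, at the upper end of
# the 2-5 extra minutes per evening trip observed in Atlanta.
print(round(extra_minutes, 1))  # 4.8
```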

The study went on to consider the economic impact of that added congestion and increased travel time. […] The economic impact on the city of Atlanta was calculated at $4.9 million. The study estimated this impact on the national level could be in the range of $408 million to $573 million. Interestingly, the entirety of the study’s data comes from before the COVID-19 pandemic, which played a major role in promoting the use of shared micromobility. A similar study performed today could find an even greater effect on congestion, travel times, and the economies of cities.

Read more of this story at Slashdot.

Crypto Lender Hodlnaut Lost Nearly $190 Million in TerraUSD Drop

Embattled cryptocurrency lender Hodlnaut downplayed its exposure to the collapsed digital-token ecosystem created by fugitive Do Kwon yet suffered a near $190 million loss from the wipeout. Bloomberg reports:
The loss is among the findings of an interim judicial managers’ report seen by Bloomberg News. It is the first such report since a Singapore court in August granted Hodlnaut protection from creditors to come up with a recovery plan. “It appears that the directors had downplayed the extent of the group’s exposure to Terra/Luna both during the period leading up to and following the Terra/Luna collapse in May 2022,” the report said.

Kwon’s TerraUSD algorithmic stablecoin and sister token Luna suffered a $60 billion wipeout in May as confidence in the project evaporated, exacerbating this year’s crypto meltdown. Hodlnaut’s Hong Kong arm suffered the nearly $190 million loss when it offloaded the stablecoin as its claimed dollar peg frayed. In a letter dated July 21, Hodlnaut’s directors “made an about-turn” about the impact and informed a Singapore police department that digital assets had been converted to TerraUSD, according to the report. Much of the TerraUSD was lent out on the Anchor Protocol, a decentralized finance platform developed on the Terra blockchain, the report said.

Hodlnaut, which operates out of Singapore and Hong Kong, halted withdrawals in August. The judicial report said more than 1,000 deleted documents from Hodlnaut’s Google workspace could have helped shed light on the business. The judicial managers haven’t been able to obtain several “key documents” in relation to Hodlnaut’s Hong Kong arm, which owes $58.3 million to Hodlnaut Pte in Singapore. About S$776,292 appeared to have been withdrawn by some employees between July and when withdrawals were halted in August, the report stated. Most of the company’s investments into DeFi were made via the Hong Kong division, it added.

Read more of this story at Slashdot.