NYC Employers Can No Longer Hide Salary Ranges In Job Listings

Starting Tuesday, New York City employers must disclose salary information in job ads, thanks to a new pay transparency law that will reverberate nationwide. Axios reports: What’s happening: Employers have spent months getting ready for this. They’ll now have to post salary ranges for open roles — but many didn’t have any established pay bands at all, says Allan Bloom, a partner at Proskauer who’s advising companies. Already, firms like American Express, JPMorgan Chase and Macy’s have added pay bands to their help-wanted ads, reports the Wall Street Journal.

How it works: Companies with four or more employees must post a salary range for any open role that’s performed in the city — or could be performed in the city. Violators could ultimately be fined up to $250,000 — though a first offense just gets a warning.

Reality check: It’s a pretty squishy requirement. The law requires only that salary ranges be in “good faith” — and there’s no penalty for paying someone outside of the range posted. It will be difficult for enforcement officials to prove a salary range is in bad faith, Bloom says. “The low-hanging fruit will be [going after] employers that don’t post any range whatsoever.” Many of the ranges posted online now are pretty wide. A senior analyst role advertised on the Macy’s jobs site is listed as paying between $85,320 and $142,080 a year. A senior podcast producer role at the WSJ advertises an “NYC pay range” of $50,000 – $180,000. The wide ranges may be partly explained by the fact that these roles can be performed remotely, since some companies adjust pay according to location.

Read more of this story at Slashdot.

Electric Scooter Ban Increased Congestion In Atlanta By 10%, Study Finds

A study published last week in the scientific journal Nature Energy examined the effects on traffic and travel time when a city bans micromobility options like electric scooters and e-bikes. The results documented exactly how much traffic increased as a result of people switching back to personal cars instead of smaller, more urban-appropriate vehicles. Electrek reports: The study, titled “Impacts of micromobility on car displacement with evidence from a natural experiment and geofencing policy,” was performed using data collected in Atlanta. The study was made possible by the city’s sudden ban on shared micromobility devices at night, which provided a unique opportunity to compare traffic levels and travel times before and after the policy change. The ban took effect on August 9, 2019, and restricted use of shared e-bikes and e-scooters in the city between the hours of 9 p.m. and 4 a.m. The study’s authors used high-resolution data from June 25, 2019, to September 22, 2019, from Uber Movement to measure changes in evening travel times before and after the policy implementation. That created a window of analysis of 45 days with and without shared e-bike and e-scooter use at night.

The study found that on average, travel times for car trips in Atlanta during evening hours increased by 9.9 to 10.7% immediately following the ban on shared micromobility. For an average commuter in Atlanta, that translated to an extra 2-5 minutes per evening trip. The authors also concluded that the impact on commute times would likely be higher in other cities across the country. According to the study, “based on the estimated US average commute time of 27.6 minutes in 2019, the results from our natural experiment imply a 17.4% increase in travel time nationally.”
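To make the quoted national figure concrete, here is the arithmetic it implies, written as a trivial Go sketch (the 27.6-minute average and the 17.4% increase are the study's own numbers as quoted above):

```go
package main

import "fmt"

func main() {
	const avgCommuteMinutes = 27.6 // estimated US average one-way commute in 2019, as quoted
	const increase = 0.174         // national travel-time increase implied by the study, as quoted

	// Prints roughly 4.8 extra minutes per average commute.
	fmt.Printf("implied extra travel time: %.1f minutes per commute\n", avgCommuteMinutes*increase)
}
```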

The study went on to consider the economic impact of that added congestion and increased travel time. […] The economic impact on the city of Atlanta was calculated at US $4.9 million. The study estimated that the impact at the national level could be in the range of US $408 million to $573 million. Interestingly, the entirety of the study’s data comes from before the COVID-19 pandemic, which played a major role in promoting the use of shared micromobility. A similar study performed today could find an even greater effect on congestion, travel times, and the economies of cities.

Read more of this story at Slashdot.

Crypto Lender Hodlnaut Lost Nearly $190 Million in TerraUSD Drop

Embattled cryptocurrency lender Hodlnaut downplayed its exposure to the collapsed digital-token ecosystem created by fugitive Do Kwon yet suffered a near $190 million loss from the wipeout. Bloomberg reports:
The loss is among the findings of an interim judicial managers’ report seen by Bloomberg News. It is the first such report since a Singapore court in August granted Hodlnaut protection from creditors to come up with a recovery plan. “It appears that the directors had downplayed the extent of the group’s exposure to Terra/Luna both during the period leading up to and following the Terra/Luna collapse in May 2022,” the report said.

Kwon’s TerraUSD algorithmic stablecoin and sister token Luna suffered a $60 billion wipeout in May as confidence in the project evaporated, exacerbating this year’s crypto meltdown. Hodlnaut’s Hong Kong arm made the near $190 million loss when it offloaded the stablecoin as its claimed dollar peg frayed. In a letter dated July 21, Hodlnaut’s directors “made an about-turn” about the impact and informed a Singapore police department that digital assets had been converted to TerraUSD, according to the report. Much of the latter was lent out on the Anchor Protocol, the report said, a decentralized finance platform developed on the Terra blockchain.

Hodlnaut, which operates out of Singapore and Hong Kong, halted withdrawals in August. The judicial report said more than 1,000 deleted documents from Hodlnaut’s Google workspace could have helped shed light on the business. The judicial managers haven’t been able to obtain several “key documents” in relation to Hodlnaut’s Hong Kong arm, which owes $58.3 million to Hodlnaut Pte in Singapore. About S$776,292 appeared to have been withdrawn by some employees between July and when withdrawals were halted in August, the report stated. Most of the company’s investments into DeFi were made via the Hong Kong division, it added.

Read more of this story at Slashdot.

Why Google Is Removing JPEG-XL Support From Chrome

Following yesterday’s article about Google Chrome preparing to deprecate the JPEG-XL image format, a Google engineer has now provided their reasons for dropping this next-generation image format. Phoronix reports: As noted yesterday, a patch is pending for Google Chrome/Chromium to deprecate the still-experimental (behind a feature flag) JPEG-XL image support. The patch marks Chrome 110 and later as deprecating JPEG-XL image support. No reasoning was provided for this deprecation, which is odd considering JPEG-XL is still very young in its lifecycle and has been receiving growing industry interest and support.

This evening, a Google engineer posted a comment on the Chromium JPEG-XL issue tracker laying out their reasons: “Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:

– Experimental flags and code should not remain indefinitely
– There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
– The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default
– By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome”
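As context for the “enabling it by default” point: a format that isn't universally supported is typically served only to browsers that advertise it, most commonly via the Accept request header, the same negotiation pattern sites already use for WebP and AVIF fallback. Below is a minimal sketch of that pattern in Go; the file paths are hypothetical, and whether any particular browser actually advertises image/jxl in its Accept header is an assumption on my part, not something the Chromium comment states.

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

// serveHero returns a JPEG XL or a plain JPEG version of the same image,
// depending on what the browser says it accepts. The file paths are
// hypothetical; the pattern is the same one used for WebP/AVIF fallback.
func serveHero(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Vary", "Accept") // caches must key responses on the Accept header
	if strings.Contains(r.Header.Get("Accept"), "image/jxl") {
		// .jxl may be missing from Go's default MIME table, so set the type explicitly.
		w.Header().Set("Content-Type", "image/jxl")
		http.ServeFile(w, r, "static/hero.jxl")
		return
	}
	http.ServeFile(w, r, "static/hero.jpg")
}

func main() {
	http.HandleFunc("/images/hero", serveHero)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In practical terms, a browser that does not support the format never advertises it, so removing the flag simply means such fallback code always serves the plain JPEG to Chrome users.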

Read more of this story at Slashdot.

Can Talking to Strangers Make Us Smarter?

Smartphones “have made it easier than ever to avoid interacting with the people in our immediate environment,” writes New York City-based author Joe Keohane.

But is that always good? “Some social scientists believe teaching kids that literally everyone in the world they hadn’t met is dangerous may have been actively harmful.”

For several years, I researched why we don’t talk to strangers and what happens when we do for my book, The Power of Strangers: The Benefits of Connecting in a Suspicious World. This effort put me in the company of anthropologists, psychologists, sociologists, political scientists, archeologists, urban designers, activists, philosophers, and theologians, plus hundreds of random strangers I talked to wherever I went. What I learned was this: we miss a lot by being afraid of strangers. Talking to strangers — under the right conditions — is good for us, good for our neighborhoods, our towns and cities, our nations, and our world. Talking to strangers can teach you things, deepen you, make you a better citizen, a better thinker, and a better person.

It’s a good way to live. But it’s more than that. In a rapidly changing, infinitely complex, furiously polarised world, it’s a way to survive….

Talking to strangers can also make us wiser, more worldly, and more empathetic, says Harvard University professor and MacArthur “genius grant” recipient, Danielle Allen. When she was teaching at the University of Chicago, Allen was repeatedly warned by colleagues to stay away from the poorer side of town. She believes that this “fear of strangers was actually eroding a lot of [her peers’] intellectual and social capacities”. She declined to stay away, and did some of her most admired work in those neighbourhoods. She has since devoted her career to fostering connections between people and groups that otherwise would not interact. “Real knowledge of what’s outside one’s garden cures fear,” Allen writes, “but only by talking to strangers can we come by such knowledge.”

By talking to strangers, you get a glimpse of the mind-boggling complexity of the human species, and the infinite variety of human experiences. It’s a cliché, but you get to see the world from the eyes of another, without which wisdom is impossible…. When these interactions go well — and they generally do — the positive perception of the stranger can generalise into better feelings about people. For me — and many of the respected experts and complete strangers I’ve spoken to — it comes down to a question of data. If I based all my perceptions of humanity on what is available through my phone or laptop, I would have a fantastically negative view of most other people.

Read more of this story at Slashdot.

‘Science Has a Nasty Photoshopping Problem’

Dr. Bik is a microbiologist who has worked at Stanford University and for the Dutch National Institute for Health, and who is “blessed” with “what I’m told is a better-than-average ability to spot repeating patterns,” according to their new Op-Ed in the New York Times.

In 2014 they’d spotted the same photo “being used in two different papers to represent results from three entirely different experiments….”

Although this was eight years ago, I distinctly recall how angry it made me. This was cheating, pure and simple. By editing an image to produce a desired result, a scientist can manufacture proof for a favored hypothesis, or create a signal out of noise. Scientists must rely on and build on one another’s work. Cheating is a transgression against everything that science should be. If scientific papers contain errors or — much worse — fraudulent data and fabricated imagery, other researchers are likely to waste time and grant money chasing theories based on made-up results….

But were those duplicated images just an isolated case? With little clue about how big this would get, I began searching for suspicious figures in biomedical journals…. By day I went to my job in a lab at Stanford University, but I was soon spending every evening and most weekends looking for suspicious images. In 2016, I published an analysis of 20,621 peer-reviewed papers, discovering problematic images in no fewer than one in 25. Half of these appeared to have been manipulated deliberately — rotated, flipped, stretched or otherwise photoshopped. With a sense of unease about how much bad science might be in journals, I quit my full-time job in 2019 so that I could devote myself to finding and reporting more cases of scientific fraud.

Using my pattern-matching eyes and lots of caffeine, I have analyzed more than 100,000 papers since 2014 and found apparent image duplication in 4,800 and similar evidence of error, cheating or other ethical problems in an additional 1,700. I’ve reported 2,500 of these to their journals’ editors and — after learning the hard way that journals often do not respond to these cases — posted many of those papers along with 3,500 more to PubPeer, a website where scientific literature is discussed in public….

Unfortunately, many scientific journals and academic institutions are slow to respond to evidence of image manipulation — if they take action at all. So far, my work has resulted in 956 corrections and 923 retractions, but a majority of the papers I have reported to the journals remain unaddressed.

Manipulated images “raise questions about an entire line of research, which means potentially millions of dollars of wasted grant money and years of false hope for patients.” Part of the problem is that despite “peer review” at scientific journals, “peer review is unpaid and undervalued, and the system is based on a trusting, non-adversarial relationship. Peer review is not set up to detect fraud.”

But there are other problems.

Most of my fellow detectives remain anonymous, operating under pseudonyms such as Smut Clyde or Cheshire. Criticizing other scientists’ work is often not well received, and concerns about negative career consequences can prevent scientists from speaking out. Image problems I have reported under my full name have resulted in hateful messages, angry videos on social media sites and two lawsuit threats….

Things could be about to get even worse. Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data. It is easy nowadays to produce fabricated photos or videos of events that never happened, and A.I.-generated images might have already started to poison the scientific literature. As A.I. technology develops, it will become significantly harder to distinguish fake from real.
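The op-ed doesn't describe the author's tooling in technical detail, but as an illustration of what the simplest kind of automated screening can look like, here is a minimal sketch of my own in Go (not Dr. Bik's method, and far cruder than real forensic tools): a 64-bit “average hash” that flags images whose coarse luminance patterns collide, which catches exact and lightly re-encoded duplicates but not the rotated, flipped or stretched manipulations described above.

```go
package main

import (
	"fmt"
	"image"
	_ "image/jpeg" // register the JPEG decoder with image.Decode
	_ "image/png"  // register the PNG decoder with image.Decode
	"os"
)

// averageHash computes a crude 64-bit perceptual hash: the image is sampled on
// an 8x8 grid of luminance values, and each bit records whether that cell is
// brighter than the mean. Identical or lightly re-encoded copies of an image
// tend to collide; rotated, flipped or stretched copies generally do not.
func averageHash(img image.Image) uint64 {
	b := img.Bounds()
	var lum [64]float64
	var mean float64
	for y := 0; y < 8; y++ {
		for x := 0; x < 8; x++ {
			// Nearest-neighbour sample near the centre of each grid cell.
			px := b.Min.X + (2*x+1)*b.Dx()/16
			py := b.Min.Y + (2*y+1)*b.Dy()/16
			r, g, bl, _ := img.At(px, py).RGBA()
			l := 0.299*float64(r) + 0.587*float64(g) + 0.114*float64(bl)
			lum[y*8+x] = l
			mean += l / 64
		}
	}
	var h uint64
	for i, l := range lum {
		if l > mean {
			h |= 1 << uint(i)
		}
	}
	return h
}

func main() {
	// Usage: dupescan fig1.png fig2.jpg ...
	seen := make(map[uint64]string)
	for _, path := range os.Args[1:] {
		f, err := os.Open(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		img, _, err := image.Decode(f)
		f.Close()
		if err != nil {
			fmt.Fprintln(os.Stderr, path+":", err)
			continue
		}
		h := averageHash(img)
		if prev, ok := seen[h]; ok {
			fmt.Printf("possible duplicate: %s and %s (hash %016x)\n", path, prev, h)
		} else {
			seen[h] = path
		}
	}
}
```

Real screening tools also compare sub-regions and transformed variants of each image, which is where most of the difficulty (and most of the manipulation the article describes) lies.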

Science needs to get serious about research fraud.
Among their proposed solutions? “Journals should pay the data detectives who find fatal errors or misconduct in published papers, similar to how tech companies pay bounties to computer security experts who find bugs in software.”

Read more of this story at Slashdot.

A Space Rock Smashed Into Mars’ Equator – and Revealed Chunks of Ice

The mission of NASA’s robotic lander InSight “is nearing an end as dust obscures its solar panels,” reports CNN. “In a matter of weeks, the lander won’t be able to send a beep to show it’s OK anymore.”

“Before it bids farewell, though, the spacecraft still has some surprises in store.”
When Mars rumbled beneath InSight’s feet on December 24, NASA scientists thought it was just another marsquake. The magnitude 4 quake was actually caused by a space rock slamming into the Martian surface a couple thousand miles away. The meteoroid left quite a crater on the red planet, and it revealed glimmering chunks of ice in an entirely unexpected place — near the warm Martian equator.
The chunks of ice — the size of boulders — “were found buried closer to the warm Martian equator than any ice that has ever been detected on the planet,” CNN explained earlier this week. The article also adds that ice below the surface of Mars “could be used for drinking water, rocket propellant and even growing crops and plants by future astronauts. And the fact that the ice was found so near the equator, the warmest region on Mars, might make it an ideal place to land crewed missions to the red planet.”

Interestingly, they note that scientists only realized it was a meteoroid strike (and not an ordinary marsquake) when before-and-after photos captured from above by the Mars Reconnaissance Orbiter, which has been circling Mars since 2006, revealed a new crater this past February: a crater 492 feet (150 meters) across and 70 feet (21 meters) deep…

When scientists connected the dots from both missions, they realized it was one of the largest meteoroid strikes on Mars since NASA began studying the red planet…. The journal Science published two new studies describing the impact and its effects on Thursday….

“The image of the impact was unlike any I had seen before, with the massive crater, the exposed ice, and the dramatic blast zone preserved in the Martian dust,” said Liliya Posiolova, orbital science operations lead for the orbiter at Malin Space Science Systems in San Diego, in a statement….

Researchers estimated the meteoroid, the name for a space rock before it hits the ground, was about 16 to 39 feet (5 to 12 meters) across. While this would have been small enough to burn up in Earth’s atmosphere, the same can’t be said for Mars, which has a thin atmosphere only 1% as dense as Earth’s…. Some of the material blasted out of the crater landed as far as 23 miles (37 kilometers) away.

Teams at NASA also captured sound from the impact, so you can listen to what it sounds like when a space rock hits Mars. The images captured by the orbiter, along with seismic data recorded by InSight, make the resulting crater one of the largest in our solar system ever observed as it was created.

Read more of this story at Slashdot.

Developer Proposes New (and Compatible) ‘Extended Flavor’ of Go

While listening to a podcast about the Go programming language, backend architect Aviv Carmi heard some loose talk about forking the language to keep its original design while also allowing the evolution of an “extended flavor.”
If such a fork takes place, Carmi writes on Medium, he hopes the two languages could interact and share the same runtime environment, libraries, and ecosystem — citing lessons learned from the popularity of other language forks:
There are well-known, hugely successful precedents for such a move. Unarguably, the JVM ecosystem will last longer and keep on gaining popularity thanks to Scala and Kotlin (a decrease in Java’s popularity is overtaken by an increase in Scala’s, during the previous decade, and in Kotlin’s, during this one). All three languages contribute to a stronger, single community and gain stronger libraries and integrations. JavaScript has undoubtedly become stronger thanks to Typescript, which quickly became one of the world’s most popular languages itself. I also believe this is the right move for us Gophers…

Carmi applauds Go’s readability-over-writability culture, its consistent concurrency model (with lightweight threading), and its broad ecosystem of tools. But in a second essay Carmi lists his complaints — about Go’s lack of keyword-based visibility modifiers (like “public” and “private”), how any symbol declared in a file “is automatically visible to the entire package,” and Go’s abundance of global built-in symbols (which complicate the choice of possible variable names, but which can still be overridden, since they aren’t actually keywords). After a longer wishlist — including null-pointer safety features and improvements to error handling — Carmi introduces a third article with “A Proposition for a Better Future.”
I would have loved to see a compile time environment that mostly looks like Go, but allows developers to be a bit more expressive to gain maintainability and runtime safety. But at the same time, allow the Go language itself to largely remain the same and not evolve into something new, as a lot of us Gophers fear. As Gophers, why not have two tools in our tool set?
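For readers who don't write Go, here is a minimal sketch (mine, not Carmi's, with a made-up package name) of the complaints listed above: visibility is decided by capitalization rather than by keywords such as public or private, every top-level symbol in a file is visible throughout its package, and built-ins such as len are predeclared identifiers rather than keywords and so can be shadowed.

```go
// Package payments is a hypothetical example package.
package payments

// Visibility is decided by the identifier's first letter rather than by
// keywords: Charge is exported outside the package, retryLimit is not.
const retryLimit = 3

// Charge is exported because its name starts with an upper-case letter.
func Charge(amountCents int) error {
	// Any top-level symbol declared in any file of this package, such as
	// retryLimit above, is visible here with no modifier at all.
	_ = retryLimit

	// Built-ins are predeclared identifiers rather than keywords, so they can
	// be shadowed; this compiles, which illustrates the complaint about the
	// abundance of global built-in symbols.
	len := amountCents
	_ = len

	return nil
}
```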

The essay proposes a new extended flavor of Go called Goat — a “new compile-time environment that will produce standard, compatible, and performant Go files that are fully compatible with any other Go project. This means they can import regular Go files but also be safely imported from any other Go file.”

“Goat implementation will most likely be delivered as a code generation tool or as a transpiler producing regular go files,” explains a page created for the project on GitHub. “However, full implementation details should be designed once the specification provided in this document is finalized.”
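The delivery mechanism is deliberately left open, but if Goat does ship as a code generation tool, one plausible wiring (purely an assumption on my part, using Go's existing go:generate hook and a hypothetical goat command) might look like this:

```go
// Package api is ordinary Go. The directive below uses the hook Go already
// provides for code-generation tools; the `goat` command itself is
// hypothetical, since the project has not specified a CLI, and the idea is
// that it would emit plain .go files from separate Goat sources before the
// normal build runs.
//
//go:generate goat generate ./...
package api
```

Running go generate ./... ahead of go build is how such generators are conventionally invoked today, and it would leave behind ordinary Go files that any project can import, which is exactly the compatibility property the proposal emphasizes.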

Carmi’s essay concludes, “I want to ignite a thorough discussion around the design and specification of Goat…. This project will allow Go to remain simple and efficient while allowing the community to experiment with an extended flavor. Goat spec should be driven by the community and so it needs the opinion and contribution of any Gopher and non-Gopher out there.”

“Come join the discussion, we need your input.”

Related link: Go principal engineer Russ Cox gave a talk at GopherCon 2022 that was all about compatibility and “the strategies Go uses to continue to evolve without breaking your programs.”

Read more of this story at Slashdot.

Computing Pioneer Who Invented the First Assembly Language Dies at Age 100

“Kathleen Booth, who has died aged 100, co-designed one of the world’s first operational computers and wrote two of the earliest books on computer design and programming,” the Telegraph wrote this week.

“She was also credited with the invention of the first assembly language, a programming language designed to be readable by users.”
In 1946 she joined a team of mathematicians under Andrew Booth at Birkbeck College undertaking calculations for the scientists working on the X-ray crystallography images which contributed to the discovery of the double helix shape of DNA….

To help with the number-crunching involved, Booth had embarked on building a computing machine called the Automatic Relay Calculator or ARC, and in 1947 Kathleen accompanied him on a six-month visit to Princeton University, where they consulted John von Neumann, who had developed the idea of storing programs in a computer. On their return to England they co-wrote General Considerations in the Design of an All Purpose Electronic Digital Computer, and went on to make modifications to the original ARC to incorporate the lessons learnt.

Kathleen devised the ARC assembly language for the computer and designed the assembler.

In 1950 Kathleen took a PhD in applied mathematics and the same year she and Andrew Booth were married. In 1953 they co-wrote Automatic Digital Calculators, which included the general principles involved in the new “Planning and Coding” programming style.

The Booths remained at Birkbeck until 1962, working on other computer designs including the All Purpose Electronic (X) Computer (APEXC, the forerunner of the ICT 1200 computer, which became a bestseller in the 1960s), for which Kathleen published the seminal Programming for an Automatic Digital Calculator in 1958. The previous year she and her husband had co-founded the School of Computer Science and Information Systems at Birkbeck.
“The APE(X)C design was commercialized and sold as the HEC by the British Tabulating Machine Co Ltd, which eventually became ICL,” remembers the Register, sharing a 2010 video about the machine (along with several links for “Further Reading”).

Read more of this story at Slashdot.