The Godot Game Engine Now Has Its Own Foundation
More details can be found via the Godot Engine blog.
Read more of this story at Slashdot.
Meta describes its method as a three-part system trained to compress audio to a desired target size. First, the encoder transforms uncompressed data into a lower frame rate “latent space” representation. The “quantizer” then compresses the representation to the target size while keeping track of the most important information that will later be used to rebuild the original signal. (This compressed signal is what gets sent through a network or saved to disk.) Finally, the decoder turns the compressed data back into audio in real time using a neural network on a single CPU.
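The three stages can be illustrated with a toy sketch. This is not Meta's actual neural codec (which uses learned networks and a much richer latent space); it is a minimal stand-in where the "encoder" averages groups of samples into a lower-rate latent, the "quantizer" replaces each latent with the index of the nearest entry in an assumed tiny codebook (those small integer indices are what would be stored or transmitted), and the "decoder" looks the values back up and upsamples:

```python
# Toy sketch of an encode -> quantize -> decode codec pipeline.
# NOT Meta's model -- just plain Python illustrating the three stages.

def encode(samples, frame=4):
    """Encoder: collapse each group of `frame` samples into one latent
    value -- a crude stand-in for a lower-frame-rate latent space."""
    return [sum(samples[i:i + frame]) / frame
            for i in range(0, len(samples), frame)]

CODEBOOK = [-1.0, -0.5, 0.0, 0.5, 1.0]  # assumed tiny codebook

def quantize(latents):
    """Quantizer: map each latent to the index of its nearest codebook
    entry. The list of indices is the compressed signal."""
    return [min(range(len(CODEBOOK)), key=lambda k: abs(CODEBOOK[k] - z))
            for z in latents]

def decode(indices, frame=4):
    """Decoder: look up codebook values and upsample back to audio rate."""
    out = []
    for idx in indices:
        out.extend([CODEBOOK[idx]] * frame)
    return out

signal = [0.9, 1.0, 0.8, 1.1, -0.4, -0.6, -0.5, -0.5]
compressed = quantize(encode(signal))      # [4, 1] -- two small integers
reconstructed = decode(compressed)
```

The reconstruction is lossy by design: eight samples shrink to two codebook indices, and the decoder can only recover an approximation of the original waveform.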
Meta’s use of discriminators proves key to creating a method for compressing the audio as much as possible without losing key elements of a signal that make it distinctive and recognizable: “The key to lossy compression is to identify changes that will not be perceivable by humans, as perfect reconstruction is impossible at low bit rates. To do so, we use discriminators to improve the perceptual quality of the generated samples. This creates a cat-and-mouse game where the discriminator’s job is to differentiate between real samples and reconstructed samples. The compression model attempts to generate samples to fool the discriminators by pushing the reconstructed samples to be more perceptually similar to the original samples.”
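The cat-and-mouse dynamic is the standard adversarial (GAN-style) objective. The sketch below shows the generic form with scalar scores assumed to be probabilities in (0, 1); Meta's discriminators are neural networks operating on audio features, not simple scalars:

```python
# Generic adversarial objective behind the "cat-and-mouse game":
# the discriminator wants real samples scored near 1 and reconstructions
# near 0; the compression model wins when its reconstructions fool it.
import math

def discriminator_loss(real_score, fake_score):
    # Binary cross-entropy: low when real -> 1 and fake -> 0.
    return -math.log(real_score) - math.log(1.0 - fake_score)

def generator_loss(fake_score):
    # Low when the discriminator is fooled (reconstruction scored near 1).
    return -math.log(fake_score)

# A reconstruction the discriminator easily spots penalizes the generator,
# pushing it toward more perceptually similar output.
assert generator_loss(0.9) < generator_loss(0.1)
```

Minimizing these two losses against each other is what drives the reconstructions to become perceptually closer to the originals.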
Read more of this story at Slashdot.
A help center post and the community guidelines offer a little more detail. They say that “text, images, and videos that contain nudity, offensive language, sexual themes, or mature subject matter” is allowed on Tumblr, but “visual depictions of sexually explicit acts (or content with an overt focus on genitalia)” aren’t. There’s an exception for “historically significant art that you may find in a mainstream museum and which depicts sex acts — such as from India’s Sunga Empire,” although it must be labeled with a mature content or “sexual themes” tag so that users can filter it from their dashboards.
“Nudity and other kinds of adult material are generally welcome. We’re not here to judge your art, we just ask that you add a Community Label to your mature content so that people can choose to filter it out of their Dashboard if they prefer,” say the community guidelines. However, users can’t post links or ads to “adult-oriented affiliate networks,” they can’t advertise “escort or erotic services,” and they can’t post content that “promotes pedophilia,” including “sexually suggestive” content with images of children. On December 17th, 2018, Tumblr permanently banned adult content from its platform. The site was owned by Verizon at the time and later sold to WordPress.com owner Automattic, which largely maintained the ban “in large part because internet infrastructure services — like payment processors and Apple’s iOS App Store — typically frown on explicit adult content,” reports The Verge.
Read more of this story at Slashdot.
How it works: Companies with more than four employees must post a salary range for any open role that’s performed in the city — or could be performed in the city. Violators could ultimately be fined up to $250,000 — though a first offense just gets a warning.
Reality check: It’s a pretty squishy requirement. The law requires only that salary ranges be in “good faith” — and there’s no penalty for paying someone outside of the range posted. It will be difficult for enforcement officials to prove a salary range is in bad faith, Bloom says. “The low-hanging fruit will be [going after] employers that don’t post any range whatsoever.” Many of the ranges posted online now are pretty wide. A senior analyst role advertised on the Macy’s jobs site is listed as paying between $85,320 and $142,080 a year. A senior podcast producer role at the WSJ advertises an “NYC pay range” of $50,000 – $180,000. The wide ranges could be particularly reasonable if these roles can be performed remotely, as some companies adjust pay according to location.
Read more of this story at Slashdot.
The study found that on average, travel times for car trips in Atlanta during evening hours increased by 9.9 to 10.7% immediately following the ban on shared micromobility. For an average commuter in Atlanta, that translated to an extra 2 to 5 minutes per evening trip. The authors also concluded that the impact on commute times would likely be higher in other cities across the country. According to the study, “based on the estimated US average commute time of 27.6 minutes in 2019, the results from our natural experiment imply a 17.4% increase in travel time nationally.”
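A quick back-of-envelope check of the national figure quoted above, applying the study's 17.4% increase to the 2019 US average commute of 27.6 minutes:

```python
# Back-of-envelope check of the study's quoted national estimate.
avg_commute_min = 27.6   # 2019 US average commute, per the study
increase = 0.174         # implied national travel-time increase
extra_minutes = avg_commute_min * increase  # roughly 4.8 extra minutes
```

That works out to roughly 4.8 extra minutes per average commute, consistent with (and somewhat above) the 2 to 5 minutes observed per evening trip in Atlanta.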
The study went on to consider the economic impact of that added congestion and increased travel time. […] The economic impact on the city of Atlanta was calculated at US $4.9 million. The study estimated that the impact at the national level could be in the range of US $408 million to US $573 million. Interestingly, the entirety of the study’s data comes from before the COVID-19 pandemic, which played a major role in promoting the use of shared micromobility. A similar study performed today could find an even greater impact on congestion, travel times, and economic impact on cities.
Read more of this story at Slashdot.
Kwon’s TerraUSD algorithmic stablecoin and sister token Luna suffered a $60 billion wipeout in May as confidence in the project evaporated, exacerbating this year’s crypto meltdown. Hodlnaut’s Hong Kong arm incurred the nearly $190 million loss when it offloaded the stablecoin as its claimed dollar peg frayed. In a letter dated July 21, Hodlnaut’s directors “made an about-turn” about the impact and informed a Singapore police department that digital assets had been converted to TerraUSD, according to the report. Much of the latter was lent out on the Anchor Protocol, the report said, a decentralized finance platform developed on the Terra blockchain.
Hodlnaut, which operates out of Singapore and Hong Kong, halted withdrawals in August. The judicial report said more than 1,000 deleted documents from Hodlnaut’s Google workspace could have helped shed light on the business. The judicial managers haven’t been able to obtain several “key documents” in relation to Hodlnaut’s Hong Kong arm, which owes $58.3 million to Hodlnaut Pte in Singapore. About S$776,292 appeared to have been withdrawn by some employees between July and when withdrawals were halted in August, the report stated. Most of the company’s investments into DeFi were made via the Hong Kong division, it added.
Read more of this story at Slashdot.
Now this evening comes a comment from a Google engineer on the Chromium JPEG-XL issue tracker, stating their reasons: “Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:
– Experimental flags and code should not remain indefinitely
– There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
– The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default
– By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome”
Read more of this story at Slashdot.
Sometimes the fire starts on the street, but more often it happens when the owner is recharging the lithium-ion battery. A mismatched charger won’t always turn off automatically when the battery’s fully charged, and keeps heating up. Or the highly flammable electrolyte inside the battery’s cells leaks out of its casing and ignites, setting off a chain reaction.
“These bikes when they fail, they fail like a blowtorch,” said Dan Flynn, the chief fire marshal at the New York Fire Department. “We’ve seen incidents where people have described them as explosive — incidents where they actually have so much power, they’re actually blowing walls down in between rooms and apartments.”
And these fires are getting more frequent.
As of Friday, the FDNY had investigated 174 battery fires, putting 2022 on track to double the number of fires that occurred last year (104) and quadruple the number from 2020 (44). So far this year, six people have died in e-bike-related fires and 93 people have been injured, up from four deaths and 79 injuries last year.
Read more of this story at Slashdot.
But is that always good? “Some social scientists believe teaching kids that literally everyone in the world they hadn’t met is dangerous may have been actively harmful.”
For several years, I researched why we don’t talk to strangers and what happens when we do for my book, The Power of Strangers: The Benefits of Connecting in a Suspicious World. This effort put me in the company of anthropologists, psychologists, sociologists, political scientists, archeologists, urban designers, activists, philosophers, and theologians, plus hundreds of random strangers I talked to wherever I went. What I learned was this: we miss a lot by being afraid of strangers. Talking to strangers — under the right conditions — is good for us, good for our neighborhoods, our towns and cities, our nations, and our world. Talking to strangers can teach you things, deepen you, make you a better citizen, a better thinker, and a better person.
It’s a good way to live. But it’s more than that. In a rapidly changing, infinitely complex, furiously polarised world, it’s a way to survive….
Talking to strangers can also make us wiser, more worldly, and more empathetic, says Harvard University professor and MacArthur “genius grant” recipient, Danielle Allen. When she was teaching at the University of Chicago, Allen was repeatedly warned by colleagues to stay away from the poorer side of town. She believes that this “fear of strangers was actually eroding a lot of [her peers’] intellectual and social capacities”. She declined to stay away, and did some of her most admired work in those neighbourhoods. She has since devoted her career to fostering connections between people and groups that otherwise would not interact. “Real knowledge of what’s outside one’s garden cures fear,” Allen writes, “but only by talking to strangers can we come by such knowledge.”
By talking to strangers, you get a glimpse of the mind-boggling complexity of the human species, and the infinite variety of human experiences. It’s a cliché, but you get to see the world through the eyes of another, without which wisdom is impossible…. When these interactions go well — and they generally do — the positive perception of the stranger can generalise into better feelings about people. For me — and many of the respected experts and complete strangers I’ve spoken to — it comes down to a question of data. If I based all my perceptions of humanity on what is available through my phone or laptop, I would have a fantastically negative view of most other people.
Read more of this story at Slashdot.
In 2014 they’d spotted the same photo “being used in two different papers to represent results from three entirely different experiments….”
Although this was eight years ago, I distinctly recall how angry it made me. This was cheating, pure and simple. By editing an image to produce a desired result, a scientist can manufacture proof for a favored hypothesis, or create a signal out of noise. Scientists must rely on and build on one another’s work. Cheating is a transgression against everything that science should be. If scientific papers contain errors or — much worse — fraudulent data and fabricated imagery, other researchers are likely to waste time and grant money chasing theories based on made-up results…
But were those duplicated images just an isolated case? With little clue about how big this would get, I began searching for suspicious figures in biomedical journals…. By day I went to my job in a lab at Stanford University, but I was soon spending every evening and most weekends looking for suspicious images. In 2016, I published an analysis of 20,621 peer-reviewed papers, discovering problematic images in no fewer than one in 25. Half of these appeared to have been manipulated deliberately — rotated, flipped, stretched or otherwise photoshopped. With a sense of unease about how much bad science might be in journals, I quit my full-time job in 2019 so that I could devote myself to finding and reporting more cases of scientific fraud.
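The kind of duplication being screened for can be illustrated with a toy sketch. This is a hypothetical stand-in, not the author's actual workflow (which relies on trained visual inspection): it hashes fixed-size tiles of a grayscale image, represented here as a nested list of ints, and flags any tile that reappears elsewhere, including flipped or 180°-rotated copies:

```python
# Toy duplicate-region screen: hash each tile together with its flipped
# and rotated variants so duplicated regions map to the same digest.
# Illustrative only -- real forensic tools work on actual image data.
import hashlib

def tile_hashes(img, size=2):
    """Yield (row, col, digest) for each size x size tile; the digest is
    canonical across horizontal/vertical flips and 180-degree rotation."""
    h, w = len(img), len(img[0])
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            tile = [row[c:c + size] for row in img[r:r + size]]
            variants = [tile,
                        [row[::-1] for row in tile],          # h-flip
                        tile[::-1],                           # v-flip
                        [row[::-1] for row in tile[::-1]]]    # 180 rotation
            digest = min(hashlib.sha256(repr(v).encode()).hexdigest()
                         for v in variants)
            yield r, c, digest

def find_duplicates(img, size=2):
    """Return pairs of tile positions that share a canonical digest."""
    seen, dupes = {}, []
    for r, c, d in tile_hashes(img, size):
        if d in seen:
            dupes.append((seen[d], (r, c)))
        else:
            seen[d] = (r, c)
    return dupes

img = [
    [1, 2, 9, 9],
    [3, 4, 9, 9],
    [2, 1, 7, 7],   # bottom-left tile is a horizontal flip
    [4, 3, 7, 7],   # of the top-left tile
]
dupes = find_duplicates(img)   # [((0, 0), (2, 0))]
```

Taking the minimum digest over the flip/rotation variants makes the hash canonical: a tile and its flipped copy produce the same set of variants, so both map to the same digest and the pair is flagged.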
Using my pattern-matching eyes and lots of caffeine, I have analyzed more than 100,000 papers since 2014 and found apparent image duplication in 4,800 and similar evidence of error, cheating or other ethical problems in an additional 1,700. I’ve reported 2,500 of these to their journals’ editors and — after learning the hard way that journals often do not respond to these cases — posted many of those papers along with 3,500 more to PubPeer, a website where scientific literature is discussed in public….
Unfortunately, many scientific journals and academic institutions are slow to respond to evidence of image manipulation — if they take action at all. So far, my work has resulted in 956 corrections and 923 retractions, but a majority of the papers I have reported to the journals remain unaddressed.
Manipulated images “raise questions about an entire line of research, which means potentially millions of dollars of wasted grant money and years of false hope for patients.” Part of the problem is that despite “peer review” at scientific journals, “peer review is unpaid and undervalued, and the system is based on a trusting, non-adversarial relationship. Peer review is not set up to detect fraud.”
But there are other problems.
Most of my fellow detectives remain anonymous, operating under pseudonyms such as Smut Clyde or Cheshire. Criticizing other scientists’ work is often not well received, and concerns about negative career consequences can prevent scientists from speaking out. Image problems I have reported under my full name have resulted in hateful messages, angry videos on social media sites and two lawsuit threats….
Things could be about to get even worse. Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data. It is easy nowadays to produce fabricated photos or videos of events that never happened, and A.I.-generated images might have already started to poison the scientific literature. As A.I. technology develops, it will become significantly harder to distinguish fake from real.
Science needs to get serious about research fraud.
Among their proposed solutions? “Journals should pay the data detectives who find fatal errors or misconduct in published papers, similar to how tech companies pay bounties to computer security experts who find bugs in software.”
Read more of this story at Slashdot.