How Mem Plans To Reinvent Note-Taking Apps With AI

David Pierce writes via The Verge: In the summer of 2019, Kevin Moody and Dennis Xu started meeting with investors to pitch their new app. They had this big idea about reshaping the way users’ personal information moves around the internet, coalescing all their data into a single tool in a way that could actually work for them. But they quickly ran into a problem: all of their mock-ups and descriptions made it seem like they were building a note-taking app. And even in those hazy early days of product development — before they had a prototype, a design, even a name — they were crystal clear that this would not be a note-taking app. Instead, the founders wanted to create something much bigger. It would encompass all of your notes but also your interests, your viewing history, your works-in-progress. “Imagine if you had a Google search bar but for all nonpublic information,” Xu says. “For every piece of information that was uniquely relevant to you.”

That’s what Moody and Xu were actually trying to build. So they kept tweaking the approach until it made sense. At one point, their app was going to be called NSFW, a half-joke that stood for “Notes and Search for Work,” and for a while, it was called Supernote. But after a few meetings and months, they eventually landed on the name “Mem.” Like Memex, a long-imagined device that humans could use to store their entire memory. Or like, well, memory. Either way, it’s not a note-taking app. It’s more like a protocol for private information, a way to pipe in everything that matters to you — your email, your calendar events, your airline confirmations, your meeting notes, that idea you had on the train this morning — and then automatically organize and make sense of it all. More importantly, it’s meant to use cutting-edge AI to give all that information back to you at exactly the right time and in exactly the right place. […]

So far, Mem is mostly a note-taking app. It’s blisteringly fast and deliberately sparse — mostly just a timeline of every mem (the company’s parlance for an individual note) you’ve ever created or viewed, with a few simple ways to categorize and organize them. It does tasks and tags, but a full-featured project manager or Second Brain system this is not. But if you look carefully, the app already contains a few signs of where Mem is headed: a tool called Writer that can actually generate information for you, based on both its knowledge of the public internet and your personal information; AI features that summarize tweet threads for you; a sidebar that automatically displays mems related to what you’re working on. All this still barely scratches the surface of what Mem wants to do and will need to do to be more than a note-taking app…

Read more of this story at Slashdot.

Google’s Text-To-Image AI Model Imagen Is Getting Its First (Very Limited) Public Outing

Google’s text-to-image AI system, Imagen, is being added to the company’s AI Test Kitchen app as a way to collect early feedback on the technology. The Verge reports: AI Test Kitchen was launched earlier this year as a way for Google to beta test various AI systems. Currently, the app offers a few different ways to interact with Google’s text model LaMDA (yes, the same one that the engineer thought was sentient), and the company will soon be adding similarly constrained Imagen requests as part of what it calls a “season two” update to the app. In short, there’ll be two ways to interact with Imagen, which Google demoed to The Verge ahead of the announcement today: “City Dreamer” and “Wobble.”

In City Dreamer, users can ask the model to generate elements from a city designed around a theme of their choice — say, pumpkins, denim, or the color blerg. Imagen creates sample buildings and plots (a town square, an apartment block, an airport, and so on), with all the designs appearing as isometric models similar to what you’d see in SimCity. In Wobble, you create a little monster. You can choose what it’s made out of (clay, felt, marzipan, rubber) and then dress it in the clothing of your choice. The model generates your monster, gives it a name, and then you can sort of poke and prod the thing to make it “dance.” Again, the model’s output is constrained to a very specific aesthetic, which, to my mind, looks like a cross between Pixar’s designs for Monsters, Inc. and the character creator feature in Spore. (Someone on the AI team must be a Will Wright fan.) These interactions are extremely constrained compared to other text-to-image models, and users can’t just request anything they’d like. That’s intentional on Google’s part, though. As Josh Woodward, senior director of product management at Google, explained to The Verge, the whole point of AI Test Kitchen is to a) get feedback from the public on these AI systems and b) find out more about how people will break them.

Google wouldn’t share any data on how many people are actually using AI Test Kitchen (“We didn’t set out to make this a billion user Google app,” says Woodward) but says the feedback it’s getting is invaluable. “Engagement is way above our expectations,” says Woodward. “It’s a very active, opinionated group of users.” He notes the app has been useful in reaching “certain types of folks — researchers, policymakers” who can use it to better understand the limitations and capabilities of state-of-the-art AI models. Still, the big question is whether Google will want to push these models to a wider public and, if so, what form will that take? Already, the company’s rivals, OpenAI and Stability AI, are rushing to commercialize text-to-image models. Will Google ever feel its systems are safe enough to take out of the AI Test Kitchen and serve up to its users?

Read more of this story at Slashdot.

Bruce Willis Denies Selling Rights To His Face

Last week, a number of outlets reported that Bruce Willis sold his face to a deepfake company called Deepcake, allowing a “digital twin” of himself to be created for use on screen. The only problem is that it’s apparently not true. According to the BBC, the actor’s agent said that he had “no partnership or agreement” with the company, and a representative of Deepcake said only Willis had the rights to his face. From the report: On 27 September, the Daily Mail reported that a deal had been struck between Willis and Deepcake. “Two-time Emmy winner Bruce Willis can still appear in movies after selling his image rights to Deepcake,” the story reads. The story was picked up by the Telegraph and a series of other media outlets. “Bruce Willis has become the first Hollywood star to sell his rights to allow a ‘digital twin’ of himself to be created for use on screen,” said the Telegraph. But that doesn’t appear to be the case.

What is true is that a deepfake of Bruce Willis was used to create an advert for Megafon, a Russian telecoms company, last year. The tech used in the advert was created by Deepcake, which describes itself as an AI company specializing in deepfakes. Deepcake told the BBC it had worked closely with Willis’ team on the advert. “What he definitely did is that he gave us his consent (and a lot of materials) to make his Digital Twin,” they said. The company says it has a unique library of high-resolution celebrities, influencers and historical figures. On its website, Deepcake promotes its work with an apparent quote from Mr Willis: “I liked the precision of my character. It’s a great opportunity for me to go back in time. The neural network was trained on content of Die Hard and Fifth Element, so my character is similar to the images of that time.” A representative from Deepcake said in a statement: “The wording about rights is wrong… Bruce couldn’t sell anyone any rights, they are his by default.”

Read more of this story at Slashdot.

Elon Musk Unveils Prototype of Humanoid Optimus Robot

Tesla CEO Elon Musk revealed a prototype of a humanoid robot that he said utilizes the company’s AI software, as well as the sensors that power its advanced driver assist features. The Verge reports: The robot was showcased at Tesla’s AI Day, and reps said it features the same technology used to enable the Full Self-Driving beta in Tesla’s cars. According to Musk, it can do more than what has been shown, but “the first time it walked without a tether was tonight on stage.” Musk said they’re targeting a price of “probably less than $20,000.” The back doors of the stage opened to reveal a deconstructed Optimus that walked forward and did a “raise the roof” dance move. After the demonstration, Musk admitted they wanted to keep the routine safe and not attempt too many moves on stage and have the robot “fall flat on its face.” “It’ll be a fundamental transformation for civilization as we know it,” said Musk.

Afterward, the company showed a few video clips of the robot doing other tasks like picking up boxes. Then Tesla’s team brought out another prototype that has its body fully assembled but not fully functional. […] Future applications could include cooking, gardening, or even “catgirl” sex partners, Musk has said, while also claiming that production could start as soon as next year. Musk says the robot is “the most important product development we’re doing this year,” predicting that it will have the potential to be “more significant than the vehicle business over time.”

Musk first announced the “Tesla Bot” at last year’s AI Day.

Read more of this story at Slashdot.

Shutterstock Is Removing AI-Generated Images

Shutterstock appears to be removing images generated by AI systems like DALL-E and Midjourney. Motherboard reports: On Shutterstock, searches for images tagged “Midjourney” yielded several photos with the AI tool’s unmistakable aesthetic, with many having high popularity scores and marked as “frequently used.” But late Monday, the results for “Midjourney” seem to have been reduced, leaving mainly stock photos of the tool’s logo. Other images use tags like “AI generated” — one image, for example, is an illustration of a futuristic building with an image description reading “Ai generated illustration of futuristic Art Deco city, vintage image, retro poster.” The image is part of a collection the artist titled “Midjourney,” which has since been removed from the site. Other images marked “AI generated,” like this burning medieval castle, seem to remain up on the site.

As Ars Technica notes, neither Shutterstock nor Getty Images explicitly prohibits AI-generated images in their terms of service, and Shutterstock users typically make around 15 to 40 percent of what the company makes when it sells an image. Some creators have not taken kindly to this trend, pointing out that these systems use massive datasets of images scraped from the web. […] In other words, the generated works are the result of an algorithmic process which mines original art from the internet without credit or compensation to the original artists. Others have worried about the impacts on independent artists who work for commissions, since the ability for anyone to create custom generated artwork potentially means lost revenue.

Read more of this story at Slashdot.

When AI Asks Dumb Questions, It Gets Smart Fast

sciencehabit shares a report from Science Magazine: If someone showed you a photo of a crocodile and asked whether it was a bird, you might laugh — and then, if you were patient and kind, help them identify the animal. Such real-world, and sometimes dumb, interactions may be key to helping artificial intelligence learn, according to a new study in which the strategy dramatically improved an AI’s accuracy at interpreting novel images. The approach could help AI researchers more quickly design programs that do everything from diagnosing disease to directing robots or other devices around homes on their own.

It’s important to think about how AI presents itself, says Kurt Gray, a social psychologist at the University of North Carolina, Chapel Hill, who has studied human-AI interaction but was not involved in the work. “In this case, you want it to be kind of like a kid, right?” he says. Otherwise, people might think you’re a troll for asking seemingly ridiculous questions. The team “rewarded” its AI for writing intelligible questions: When people actually responded to a query, the system received feedback telling it to adjust its inner workings so as to behave similarly in the future. Over time, the AI implicitly picked up lessons in language and social norms, honing its ability to ask questions that were sensical and easily answerable.

The new AI has several components, some of them neural networks, complex mathematical functions inspired by the brain’s architecture. “There are many moving pieces […] that all need to play together,” Krishna says. One component selected an image on Instagram — say a sunset — and a second asked a question about that image — for example, “Is this photo taken at night?” Additional components extracted facts from reader responses and learned about images from them. Across 8 months and more than 200,000 questions on Instagram, the system’s accuracy at answering questions similar to those it had posed increased 118%, the team reports today in the Proceedings of the National Academy of Sciences. A comparison system that posted questions on Instagram but was not explicitly trained to maximize response rates improved its accuracy only 72%, in part because people more frequently ignored it. The main innovation, Jaques says, was rewarding the system for getting humans to respond, “which is not that crazy from a technical perspective, but very important from a research-direction perspective.” She’s also impressed by the large-scale, real-world deployment on Instagram.
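The core training signal here is simpler than the system around it. Below is a minimal, purely illustrative sketch of that idea — not the actual PNAS system, which fine-tunes a neural question generator — treating “did a human reply?” as the reward so that question styles that get answered are chosen more often. The style names and response rates are invented for the simulation.

```python
import random
from collections import defaultdict

# Toy sketch only: a bandit over question styles, rewarded whenever a
# (simulated) human replies. Styles that elicit answers win out over time.

# Assumed response rates per question style, purely for illustration.
QUESTION_STYLES = {
    "polite_and_specific": 0.60,
    "terse_and_generic": 0.25,
    "confusing_off_topic": 0.05,
}

counts = defaultdict(int)      # how often each style has been tried
replies = defaultdict(float)   # how many replies each style has earned
EPSILON = 0.1                  # exploration rate

def post_question(style: str) -> float:
    """Simulate posting a question and observing whether anyone replies."""
    return 1.0 if random.random() < QUESTION_STYLES[style] else 0.0

for step in range(5000):
    if random.random() < EPSILON or step < len(QUESTION_STYLES):
        style = random.choice(list(QUESTION_STYLES))   # explore
    else:
        # exploit: pick the style with the best observed reply rate so far
        style = max(QUESTION_STYLES,
                    key=lambda s: replies[s] / counts[s] if counts[s] else 0.0)
    reward = post_question(style)
    counts[style] += 1
    replies[style] += reward

for style in QUESTION_STYLES:
    rate = replies[style] / counts[style] if counts[style] else 0.0
    print(f"{style:22s} asked {counts[style]:4d} times, reply rate {rate:.2f}")
```

Run long enough, the asker settles on the intelligible, answerable style — the same dynamic, at toy scale, that the researchers exploited on Instagram.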

Read more of this story at Slashdot.

Will AI Really Perform Transformative, Disruptive Miracles?

“An encounter with the superhuman is at hand,” argues Canadian novelist, essayist, and cultural commentator Stephen Marche in an article in the Atlantic titled “Of Gods and Machines”. He argues that GPT-3’s 175 billion parameters give it interpretive power “far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic.”

But despite being a technology where inscrutability “is an industrial by-product of the process,” we may still not see what’s coming, Marche argues — that AI is “every bit as important and transformative as the other great tech disruptions, but more obscure, tucked largely out of view.”
Science fiction, and our own imagination, add to the confusion. We just can’t help thinking of AI in terms of the technologies depicted in Ex Machina, Her, or Blade Runner — people-machines that remain pure fantasy. Then there’s the distortion of Silicon Valley hype, the general fake-it-’til-you-make-it atmosphere that gave the world WeWork and Theranos: People who want to sound cutting-edge end up calling any automated process “artificial intelligence.” And at the bottom of all of this bewilderment sits the mystery inherent to the technology itself, its direct thrust at the unfathomable. The most advanced NLP programs operate at a level that not even the engineers constructing them fully understand.

But the confusion surrounding the miracles of AI doesn’t mean that the miracles aren’t happening. It just means that they won’t look how anybody has imagined them. Arthur C. Clarke famously said that “technology sufficiently advanced is indistinguishable from magic.” Magic is coming, and it’s coming for all of us….

And if AI harnesses the power promised by quantum computing, everything I’m describing here would be the first dulcet breezes of a hurricane. Ersatz humans are going to be one of the least interesting aspects of the new technology. This is not an inhuman intelligence but an inhuman capacity for digital intelligence. An artificial general intelligence will probably look more like a whole series of exponentially improving tools than a single thing. It will be a whole series of increasingly powerful and semi-invisible assistants, a whole series of increasingly powerful and semi-invisible surveillance states, a whole series of increasingly powerful and semi-invisible weapons systems. The world would change; we shouldn’t expect it to change in any kind of way that you would recognize.

Our AI future will be weird and sublime and perhaps we won’t even notice it happening to us. The paragraph above was composed by GPT-3. I wrote up to “And if AI harnesses the power promised by quantum computing”; machines did the rest.

Stephen Hawking once said that “the development of full artificial intelligence could spell the end of the human race.” Experts in AI, even the men and women building it, commonly describe the technology as an existential threat. But we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was.

AI is not the beginning of the world, nor the end. It’s a continuation. The imagination tends to be utopian or dystopian, but the future is human — an extension of what we already are…. Artificial intelligence is returning us, through the most advanced technology, to somewhere primitive, original: an encounter with the permanent incompleteness of consciousness…. They will do things we never thought possible, and sooner than we think. They will give answers that we ourselves could never have provided.

But they will also reveal that our understanding, no matter how great, is always and forever negligible. Our role is not to answer but to question, and to let our questioning run headlong, reckless, into the inarticulate.

Read more of this story at Slashdot.

Meta AI and Wikimedia Foundation Build an ML-Powered, Citation-Checking Bot

Digital Trends reports:

Working with the Wikimedia Foundation, Meta AI (that’s the AI research and development lab for the social media giant) has developed what it claims is the first machine learning model able to automatically scan hundreds of thousands of citations at once to check whether they support the corresponding claims….

“I think we were driven by curiosity at the end of the day,” Fabio Petroni, research tech lead manager for the FAIR (Fundamental AI Research) team of Meta AI, told Digital Trends. “We wanted to see what was the limit of this technology. We were absolutely not sure if [this AI] could do anything meaningful in this context. No one had ever tried to do something similar [before].”

Trained using a dataset consisting of 4 million Wikipedia citations, Meta’s new tool is able to effectively analyze the information linked to a citation and then cross-reference it with the supporting evidence…. Just as impressive as the ability to spot fraudulent citations, however, is the tool’s potential for suggesting better references. Deployed as a production model, this tool could helpfully suggest references that would best illustrate a certain point. While Petroni balks at it being likened to a factual spellcheck, flagging errors and suggesting improvements, that’s an easy way to think about what it might do.
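To make the “cross-reference a claim with its evidence” idea concrete, here is a hedged sketch of verification-by-similarity. It is not Meta’s actual verifier (which retrieves passages from a large index and was trained on those 4 million citations); it just assumes the off-the-shelf sentence-transformers library and scores how well each candidate passage supports a claim.

```python
# Illustrative sketch only -- not Meta AI's model. Scores how well candidate
# passages support a claim, flagging weak citations and surfacing a stronger
# alternative. Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim = "Blackbeard was killed in a battle off Ocracoke Island in 1718."
candidate_passages = [
    "Blackbeard died in combat near Ocracoke Island on 22 November 1718.",
    "Ocracoke Island is popular with tourists for its beaches and lighthouse.",
]

claim_emb = model.encode(claim, convert_to_tensor=True)
passage_embs = model.encode(candidate_passages, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, passage_embs)[0]

# A low best score would flag the citation for human review; the top-scoring
# passage could be suggested as a better supporting reference.
for passage, score in sorted(zip(candidate_passages, scores.tolist()),
                             key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {passage}")
```

A production system would use a model trained specifically for entailment rather than raw cosine similarity, but the flag-and-suggest workflow is the same shape.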

Read more of this story at Slashdot.

Amazon Extends Alexa To Enable Ambient Intelligence

Sean Michael Kerner writes via VentureBeat: Amazon’s Alexa voice assistant technology isn’t just about natural language processing (NLP) anymore; it has become a platform aiming for ambient intelligence. At Amazon’s Alexa Live 2022 event today, the company announced a series of updates and outlined its general strategy for enabling ambient intelligence, which it says will transform how users in all kinds of settings interact with technology and benefit from artificial intelligence (AI). Among the announcements made at the event are the new Alexa Voice Service (AVS) SDK 3.0 to help developers build voice services, and new tools including the Alexa Routines Kit to support development of multistep routines that can be executed via voice. The concept of ambient intelligence is about having technology available when users need it, without requiring them to learn how to operate a service.

“One of the hallmarks of ambient intelligence is that it’s proactive,” [Aaron Rubenson, VP of Amazon Alexa] said. “Today, more than 30% of smart home interactions are initially initiated by Alexa without customers saying anything.” To further support the development of proactive capabilities, Amazon is now rolling out its Alexa Routines Kit. The new kit enables Alexa skills developers to preconfigure contextually relevant routines and then offer them to customers when they’re actually using the relevant skill. One example Rubenson cited of how routines work comes from the automotive industry. He said that Jaguar Land Rover is using the Alexa Routines Kit to create a routine it calls “good night,” which will automatically lock the doors, provide a notification of the car’s fuel or charge level and then turn on guardian mode, which checks for unauthorized activity.
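As a mental model of what such a preconfigured routine amounts to — this is emphatically not the Alexa Routines Kit API, just a hypothetical plain-Python sketch with invented names — a routine is essentially a trigger phrase bound to an ordered list of actions:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of a multistep routine like the "good night" example
# above. Not the Alexa Routines Kit API; the actions and values are made up.

@dataclass
class Routine:
    name: str
    trigger_phrase: str
    actions: List[Callable[[], str]] = field(default_factory=list)

    def run(self) -> None:
        print(f"Running routine: {self.name}")
        for action in self.actions:
            print(" -", action())

def lock_doors() -> str:
    return "Doors locked"

def report_charge_level() -> str:
    return "Charge level: 78%"   # placeholder value

def enable_guardian_mode() -> str:
    return "Guardian mode on: monitoring for unauthorized activity"

good_night = Routine(
    name="good night",
    trigger_phrase="good night",
    actions=[lock_doors, report_charge_level, enable_guardian_mode],
)

good_night.run()
```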

As part of the Alexa Live event, Amazon is also rolling out a series of efforts to help developers build better skills and make more money doing it. The new Skill Developer Accelerator Program (SDAP) is an effort to reward custom skill developers for taking certain actions that, based on historical data, Amazon knows result in higher-quality skills. Rubenson said that the program will include monetary incentives as well as promotional credits for developers that take these actions. There is also a Skills Quality Coach that will analyze skills individually, assign a skill quality score, and then provide individualized recommendations to the developer about how to improve that skill.

Read more of this story at Slashdot.

‘I’m CEO of a Robotics Company, and I Believe AI’s Failed on Many Fronts’

“Aside from drawing photo-realistic images and holding seemingly sentient conversations, AI has failed on many promises,” writes the cofounder and CEO of Serve Robotics:
The resulting rise in AI skepticism leaves us with a choice: We can become too cynical and watch from the sidelines as winners emerge, or find a way to filter noise and identify commercial breakthroughs early to participate in a historic economic opportunity. There’s a simple framework for differentiating near-term reality from science fiction. We use the single most important measure of maturity in any technology: its ability to manage unforeseen events, commonly known as edge cases. As a technology hardens, it becomes more adept at handling increasingly infrequent edge cases and, as a result, gradually unlocks new applications…

Here’s an important insight: Today’s AI can achieve very high performance if it is focused on either precision, or recall. In other words, it optimizes one at the expense of the other (i.e., fewer false positives in exchange for more false negatives, and vice versa). But when it comes to achieving high performance on both of those simultaneously, AI models struggle. Solving this remains the holy grail of AI….
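For readers who want the precision/recall tradeoff in concrete terms, here is a small, self-contained illustration on synthetic data (not any production model): sweeping a classifier’s decision threshold pushes precision up while recall falls, and vice versa.

```python
import numpy as np

# Synthetic illustration of the precision/recall tradeoff described above.
# Positive examples get higher (but noisy) scores than negatives; raising the
# decision threshold trades recall away for precision.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)                       # ground truth
scores = np.clip(0.4 + 0.35 * labels + rng.normal(0, 0.2, labels.size), 0, 1)

for threshold in (0.3, 0.5, 0.7, 0.9):
    predicted_positive = scores >= threshold
    tp = np.sum(predicted_positive & (labels == 1))   # true positives
    fp = np.sum(predicted_positive & (labels == 0))   # false positives
    fn = np.sum(~predicted_positive & (labels == 1))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    print(f"threshold={threshold:.1f}  precision={precision:.2f}  recall={recall:.2f}")
```

A low threshold catches nearly every positive but lets in false alarms; a high threshold is rarely wrong but misses much of what it should catch — exactly the one-at-the-expense-of-the-other behavior the author describes.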

Delivery Autonomous Mobile Robots (AMRs) are the first application of urban autonomy to commercialize, while robo-taxis still await an unattainable hi-fi AI performance. The rate of progress in this industry, as well as our experience over the past five years, has strengthened our view that the best way to commercialize AI is to focus on narrower applications enabled by lo-fi AI, and use human intervention to achieve hi-fi performance when needed. In this model, lo-fi AI leads to early commercialization, and incremental improvements afterwards help drive business KPIs.
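The “human intervention when needed” pattern can be sketched in a few lines. This is an illustrative stand-in, not Serve Robotics’ actual stack; the threshold and function names are assumptions.

```python
import random

# Illustrative sketch of the lo-fi-AI-plus-human-fallback pattern: the model
# acts autonomously when confident and escalates rare edge cases to a remote
# operator. Threshold and names are invented for this example.

CONFIDENCE_THRESHOLD = 0.85

def model_decide(observation: str) -> tuple[str, float]:
    """Stand-in for a perception/planning model returning (action, confidence)."""
    confidence = random.random()
    return ("proceed" if confidence > 0.5 else "stop"), confidence

def ask_remote_operator(observation: str) -> str:
    """Stand-in for escalating a hard edge case to a human operator."""
    return "proceed_with_caution"

def decide(observation: str) -> tuple[str, str]:
    action, confidence = model_decide(observation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action, "autonomous"
    return ask_remote_operator(observation), "human_assisted"

for frame in range(8):
    action, mode = decide(f"camera_frame_{frame}")
    print(f"{mode:15s} -> {action}")
```

The business logic follows directly: the cheaper and rarer the escalations become, the closer the system gets to the hi-fi performance the author says robo-taxis are still waiting on.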

By targeting more forgiving use cases, businesses can use lo-fi AI to achieve commercial success early, while maintaining a realistic view of the multi-year timeline for achieving hi-fi capabilities.
After all, sci-fi has no place in business planning.

Read more of this story at Slashdot.