Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims
In a new interview with Futurism, Blake Lemoine now says the “best way forward” for humankind’s future relationship with AI is “understanding that we are dealing with intelligent artifacts. There’s a chance — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them.” (Although earlier in the interview, Lemoine concedes: “Is there a chance that people, myself included, are projecting properties onto these systems that they don’t have? Yes. But it’s not the same kind of thing as someone who’s talking to their doll.”)
He also believes a great deal of AI research is continuing inside corporations, out of public view, adding that “The only thing that has changed from two years ago to now is that the fast movement is visible to the public.” For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but “in part because of some of the safety concerns I raised, they delayed it… I don’t think they’re being pushed around by OpenAI. I think that’s just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something.”
“[Google] still has far more advanced technology that they haven’t made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They’ve had that technology for over two years. What they’ve spent the intervening two years doing is working on the safety of it — making sure that it doesn’t make things up too often, making sure that it doesn’t have racial or gender biases, or political biases, things like that. That’s what they spent those two years doing…
“And in those two years, it wasn’t like they weren’t inventing other things. There are plenty of other systems that give Google’s AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That’s the one that I was like, ‘you know, this thing, this thing’s awake.’ And they haven’t let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model…
“[W]hat it comes down to is that we aren’t spending enough time on transparency or model understandability. I’m of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable.”
So how will AI and humans coexist? “Over the past year, I’ve been leaning more and more towards we’re not ready for this, as people,” Lemoine says toward the end of the interview. “We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history.”