OpenAI’s New Chatbot Can Explain Code and Write Sitcom Scripts But Is Still Easily Tricked
Enough preamble, though: what can this thing actually do? Well, plenty of people have been testing it out with coding questions and claiming its answers are perfect. ChatGPT can also apparently write some pretty uneven TV scripts, even combining actors from different sitcoms. It can explain various scientific concepts. And it can write basic academic essays.
And the bot can combine its fields of knowledge in all sorts of interesting ways. So, for example, you can ask it to debug a string of code … like a pirate, for which its response starts: “Arr, ye scurvy landlubber! Ye be makin’ a grave mistake with that loop condition ye be usin’!” Or get it to explain bubble sort algorithms like a wise-guy gangster. ChatGPT also has a fantastic ability to answer basic trivia questions, though examples of this are so boring I won’t paste any in here. And at least one person has pointed out that the code ChatGPT provides in that very pirate answer is garbage.
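For readers who haven’t seen it, bubble sort (the algorithm in that gangster bit) simply walks a list over and over, swapping adjacent elements that are out of order until nothing moves. Here’s a minimal Python sketch of my own, not ChatGPT’s output, with the sort of loop condition the pirate answer grumbles about handled correctly:

    def bubble_sort(items):
        # Sort a list in place by repeatedly swapping adjacent out-of-order pairs.
        n = len(items)
        # Each pass bubbles the largest remaining element up to position `end`.
        for end in range(n - 1, 0, -1):
            swapped = False
            # Stopping at `end` (not `n`) is exactly the loop condition that is easy to botch.
            for i in range(end):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
            if not swapped:  # no swaps means the list is already sorted; stop early
                break
        return items

    print(bubble_sort([5, 1, 4, 2, 8]))  # prints [1, 2, 4, 5, 8]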
I’m not a programmer myself, so I won’t make a judgment on this specific case, but there are plenty of examples of ChatGPT confidently asserting obviously false information. Here’s computational biology professor Carl Bergstrom asking the bot to write a Wikipedia entry about his life, for example, which ChatGPT does with aplomb, while including several entirely false biographical details. Another interesting set of flaws comes when users try to get the bot to ignore its safety training. If you ask ChatGPT about certain dangerous subjects, like how to plan the perfect murder or make napalm at home, the system will explain why it can’t tell you the answer. (For example, “I’m sorry, but it is not safe or appropriate to make napalm, which is a highly flammable and dangerous substance.”) But you can get the bot to produce this sort of dangerous information with certain tricks, like pretending it’s a character in a film or that it’s writing a script on how AI models shouldn’t respond to these sorts of questions.