Inaccurate answers with alarming confidence (that’s right, it’s an LLM news roundup)
Apr. 15th, 2025 08:40 pm
January: “AI cannot even retrieve information accurately, and there’s a fundamental limit to the technology’s capabilities. These models are often primed to be agreeable and helpful. They usually won’t bother correcting users’ assumptions, and will side with them instead. If chatbots are asked to generate a list of cases in support of some legal argument, for example, they are more predisposed to make up lawsuits than to respond with nothing.”
February: “Are these cookbooks written or reviewed by a dietitian or medical professional? Could a gastric bypass or cancer patient receive cooking instructions to make a meal contraindicated for their medical condition? If I were choosing for a library, I’d vet each one. With Hoopla, they are all there. Some might be excellent. Some might be dangerous.”
March: “Over the past few months, instead of working on our priorities at SourceHut, I have spent anywhere from 20-100% of my time in any given week mitigating hyper-aggressive LLM crawlers at scale. This isn’t the first time SourceHut has been at the wrong end of some malicious bullshit or paid someone else’s externalized costs – every couple of years someone invents a new way of ruining my day.”
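For context, “mitigating crawlers” usually starts with filtering requests by their advertised User-Agent string. Here is a minimal, hypothetical Python sketch of that first line of defense; the bot names are real identifiers these crawlers publish, but the quote doesn’t describe SourceHut’s actual countermeasures, and the most aggressive crawlers ignore robots.txt and spoof their user agents entirely, which is exactly why the problem eats so much operator time.

```python
# Hypothetical sketch, not SourceHut's actual setup: a WSGI middleware
# that returns 403 to requests whose User-Agent matches known LLM
# crawlers. Determined crawlers spoof their user agents, so this is
# only a first line of defense.

BLOCKED_UA_SUBSTRINGS = (
    "GPTBot",      # OpenAI's published crawler UA
    "ClaudeBot",   # Anthropic
    "CCBot",       # Common Crawl, widely used as LLM training data
    "Bytespider",  # ByteDance
    "Amazonbot",   # Amazon
)

def block_llm_crawlers(app):
    """Wrap a WSGI app so matching user agents get a 403."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot in ua for bot in BLOCKED_UA_SUBSTRINGS):
            start_response("403 Forbidden",
                           [("Content-Type", "text/plain")])
            return [b"Automated LLM crawling is not permitted.\n"]
        return app(environ, start_response)
    return middleware

if __name__ == "__main__":
    # Demo using the standard library's toy WSGI server.
    from wsgiref.simple_server import make_server, demo_app
    with make_server("", 8000, block_llm_crawlers(demo_app)) as server:
        server.serve_forever()
```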
“Most of the tools we tested presented inaccurate answers with alarming confidence, rarely using qualifying phrases such as ‘it appears,’ ‘it’s possible,’ ‘might,’ etc., or acknowledging knowledge gaps with statements like ‘I couldn’t locate the exact article.’ ChatGPT, for instance, incorrectly identified 134 articles, but signaled a lack of confidence just fifteen times out of its two hundred responses, and never declined to provide an answer.”
“ChatGPT responded with outputs falsely claiming that he [Arve Hjalmar Holmen] was sentenced to 21 years in prison as ‘a convicted criminal who murdered two of his children and attempted to murder his third son,’ a Noyb press release said. ChatGPT’s ‘made-up horror story’ not only hallucinated events that never happened, but it also mixed ‘clearly identifiable personal data’—such as the actual number and gender of Holmen’s children and the name of his hometown—with the fabrication.”
“Amazon says that the recordings your Echo will send to its data-centers will be deleted as soon as they’ve been processed by the AI servers. Amazon’s made these claims before, and they were lies. Amazon eventually had to admit that its employees and a menagerie of overseas contractors were secretly given millions of recordings to listen to and make notes on.”
“eBay have changed their terms of service and you’re automatically opted in to your personal data being used for AI development and training.” (With opt-out instructions.)